
Sunday, August 12, 2018

Windows App/Prcs Service Management

Tools 8.56.07 and HCM 9.2 installed via classic method (non-DPK)

Created one PS_CFG_HOME pointing to E:\pscfg
Under this I have 
appserv
appserv\DEV
appserv\TST
appserv\prcs\DEV
appserv\prcs\TST

If I use PS_HOME\appserv\psadmin to create a Windows service for the above configuration, it creates a single service covering both application server domains and both process scheduler domains. So it is not possible to start or stop just one domain, or just the app server or process scheduler of a single domain.

The service created via psadmin runs the command
PS_HOME\bin\server\winx86\psntsrv.exe "E:\pscfg"
which basically applies to all domains under PS_CFG_HOME.

So I used the Ruby that is packaged with the DPK installables to create 4 separate services, so that I can start/stop each one individually. I had downloaded the PUM 26 DPK, so I used the setup folder along with PT-DPK-WIN-8.56.07-1of2.zip and PT-DPK-WIN-8.56.07-2of2.zip from that location.
Ran the following command in my CMD session as admin


psft-dpk-setup.bat --env_type midtier --domain_type appbatch

The initial part of the setup installs the Puppet software (which bundles Ruby), which is all we need here. Quit the installation once that step completes.

Created appserver_win_service.rb under each application server domain and prcs_win_service.rb under each process scheduler domain. Took the one created for PUM as a sample and edited it accordingly.

Then create the services manually in the CMD session.

Application Server

sc create PsftAppServerDomain{name_of_domain}Service DisplayName= PsftAppServerDomain{name_of_domain}Service binpath= "C:\PROGRA~1\PUPPET~1\Puppet\sys\ruby\bin\ruby.exe E:\pscfg\appserv\{name_of_domain}\appserver_win_service.rb" group= Network

sc description PsftAppServerDomain{name_of_domain}Service "PeopleSoft AppServer Domain {name_of_domain} Service"

Process Scheduler 

sc create PsftPrcsDomain{name_of_domain}Service DisplayName= PsftPrcsDomain{name_of_domain}Service binpath= "C:\PROGRA~1\PUPPET~1\Puppet\sys\ruby\bin\ruby.exe E:\pscfg\appserv\prcs\{name_of_domain}\prcs_win_service.rb" group= Network

sc description PsftPrcsDomain{name_of_domain}Service "PeopleSoft Prcs Domain {name_of_domain} Service"


Start the application server and process scheduler manually first via the classic psadmin method. Once both start cleanly, shut them down via psadmin and then attempt to start them via the new services.
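
For example, assuming the DEV application server domain from above, its service can now be started and stopped on its own:

net start PsftAppServerDomainDEVService
net stop PsftAppServerDomainDEVService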

My new service executes a command like below
Path_to_ruby.exe E:\pscfg\appserv\{name_of_domain}\appserver_win_service.rb

The appserver_win_service.rb file I created is a plain-text file with Ruby code which in turn uses the psadmin command line to start/stop the domain.

So I have 4 .rb files, one for each service.
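
I am not reproducing the delivered PUM script here, but the following is a minimal sketch of what such a wrapper can look like. It assumes the Puppet-bundled Ruby provides the win32-service gem (it did in my install), and the paths and domain name are placeholders to adjust in each of the 4 files.

require 'win32/daemon'
include Win32

# Placeholders - adjust per environment and per domain (one .rb file per service)
PS_HOME = 'E:\psft\pt\ps_home8.56.07'
PS_CFG  = 'E:\pscfg'
DOMAIN  = 'DEV'
PSADMIN = "#{PS_HOME}\\appserv\\psadmin.exe"

class PsftAppServerService < Daemon
  def service_main
    ENV['PS_HOME']     = PS_HOME
    ENV['PS_CFG_HOME'] = PS_CFG
    # boot the application server domain when the Windows service starts
    system("\"#{PSADMIN}\" -c boot -d #{DOMAIN}")
    sleep 10 while running?
  end

  def service_stop
    # shut the domain down when the Windows service is stopped
    system("\"#{PSADMIN}\" -c shutdown -d #{DOMAIN}")
    exit!
  end
end

PsftAppServerService.mainloop

The prcs_win_service.rb version is the same idea, just swapping in the process scheduler commands (psadmin -p start -d {name_of_domain} and psadmin -p stop -d {name_of_domain}).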

Saturday, June 30, 2018

Wildcard SSL Certificates (CN/SAN)

We had a need to enable SSL on all our non-production PeopleSoft environments, so we decided to create one SSL certificate that would work across all of them.

Following are the details of this exercise.
I am running PT 8.56.xx on HCM 9.2 and WebLogic 12c on a Windows 2016 VM, but this should work for any release.

So in my case I have two webservers (web1.mycompany.com and web2.mycompany.com) in my non-production PeopleSoft landscape and I wish to use the same certificate on both the webservers. 

PeopleSoft delivers a wrapper called pskeymanager.cmd (or .sh) which essentially runs the Java keytool command to create and manage a Java keystore.

When you use the wrapper to generate a new keystore or a new certificate request, it prompts for the CN (common name) value, which generally is the VM name - so in my case it would be web1.mycompany.com or web2.mycompany.com. If I generate the request as web1.mycompany.com then I won't be able to use the certificate on web2, and vice-versa. So the plan is to provide *.mycompany.com as the CN value, which is essentially a wildcard; any webserver in the mycompany.com domain would then be able to use this SSL certificate.

When we do this, there is another attribute called SAN (Subject Alternative Name) that needs to be populated. In my experience, a browser like Chrome uses it to verify the validity of the certificate, while a browser like IE does not.

The vanilla pskeymanager.cmd doesn't set the SAN attribute, so we have to customize the script as follows.

Search for the string :gencsr; a few lines below it is the keytool command that generates a CSR (Certificate Signing Request). Towards the end of that command add

-ext san=dns:web1.mycompany.com,dns:web2.mycompany.com

or

-ext san=dns:*.mycompany.com
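
For reference, the effective keytool call after this customization would look something like the following - the alias, CSR filename, keystore path and password here are just placeholders, and pskeymanager.cmd uses its own variables for them:

keytool -certreq -alias mycert -file mycert_certreq.txt -keystore keystore\pskey -storepass password -ext san=dns:*.mycompany.com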

Now use the pskeymanager.cmd command to create a new keystore and generate the CSR. Then provide the CSR to your CA (Certificate Authority) for signing, and once you receive the response certificate verify that the SAN value is populated.
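
A quick way to confirm the SAN made it into the signed certificate is keytool's -printcert option (the filename is just a placeholder for whatever your CA returns):

keytool -printcert -file signed_cert.cer

Look for the SubjectAlternativeName extension in the output and confirm it lists the expected DNS entries.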

Load the signed certificate into WebLogic and test. Now you can use the same SSL certificate on all the webservers in the mycompany.com domain.

Saturday, April 21, 2018

PSCA Install and Configuration - Upgrade to 9.2

In this POC I am upgrading HCM 9.1 PT 8.53.x to HCM 9.2 PT 8.56.x

VM setup 

  1. I have done native OS installs of PUM Image 25 (VM name: PUM25) and Upgrade Source 03 (VM name: UPG03) on Windows 2016 VMs, and those systems are up and running.
  2. Next, I have a copy of production running on SQL Server 2014 on a Windows 2016 VM (VM name: DB).
  3. I have two other Windows 2016 VMs, one for the App/Prcs tier and the other for PIA. On these I did a native OS, DPK-method install of the middleware (Tuxedo, WebLogic) and the Tools and App homes using the command line:
  4. psft-dpk-setup.bat --env_type midtier --deploy_only --deploy_type all
     No domains have been deployed yet.
  5. Software is installed on E:\psft (E:\ is a local drive on the VM).

PSCA/Oracle Client/PT Client Setup
I am running my PSCA on the App/Prcs VM mentioned above. 
Mapped drives to the E$ shares on PUM25 and UPG03, and to \\PUM25\E$\psft\pt\hcm_pi_home.

Installed SQL Server Management Studio and related drivers for SQL Server 2014. 

Opened CMD as admin and navigated to \\PUM25\E$\psft\pt\tools_client and ran SetupPTClient.bat -t

This will install Oracle client and Oracle based PT Client on the C:\ drive. This PT Client will be used to connect to PUM25 and/or UPG03.
Also installed PSCA - default location on C:\ drive.

Created 32-bit and 64-bit ODBC connections for my copy-of-production database. PT 8.53 itself only needs the 32-bit connection, but PSCA uses the 64-bit version when it runs its connectivity test.

Verified that I can connect to PUM25, UPG03 and the copy of prod via Application Designer as well as the appropriate SQL tool.

PSCA Configuration
Launch PSCA as admin
Select Update Manager

General Option 
    - PS_HOME - point it to the local install ps_home
    - Create local directories to store download, output and staging and point them appropriately
    - point to SQLCMD.EXE file for SQL Query Tool
PUM Source - point to the Oracle db, SQL Client Tool would be path to sqlplus.exe, PS_HOME, PS_APP_HOME and PS_CUST_HOME would be the Oracle PT client installed in the prior step. 
Don't have to enable any check-boxes
PI_HOME - will be the mapped drive to hcm_pi_home mapped earlier
PIA URL to PUM signon page
Did not enable EM Hub

Upload Upg Source db to PUM
Launch the "Upload Target Database Information to Image" option and select the Upgrade Source database and upload that to the PUM Image.
Once done, log in to the PUM PIA, navigate to PeopleTools > Lifecycle Tools > Update Manager > Update Manager Dashboard and click on Define Upgrade Package.
Named my package HR91TO92.
Follow the prompts and select the "required for upgrade" fixes too.
So it should create HR91TO92UPG and HR91TO92RFU.
Now back in PSCA, select HR91TO92UPG from the drop-down and it should prompt for downloading HR91TO92RFU too. Download both packages to the download directory.
As we will be doing a Tools upgrade at the same time, copy PS_HOME\PTU\updPTU856.zip to the download folder manually.

Selected "Upgrade to New Application Release"
Initial Pass is selected by default
Setup target database - copy of prod
SQL Client tool - will be path to SQLCMD.EXE
Current home - will be the copied hr91 directory; I have one directory which holds both tools and app, so I provided the same path for PS_HOME, PS_APP_HOME and PS_CUST_HOME
Enabled Configure New Homes for Upgrade
Provided the paths for the new homes, i.e. the ones created by the DPK install on this VM.

Setup Upgrade Source database - this is similar to the PUM Source database configuration done earlier.

Select Upgrade page - should show Upgrade Source and target information correctly. Application and Tools upgrade sections will be selected and grayed out. Each one should have a package listed under the appropriate section. 
Select the Required for Upgrade Package section.
Then continue to the compatibility check screen and verify that all checks return green. If anything does not, step back through the previous screens and verify the inputs and selections. A total of 11 checks are performed.

Click on Finish and the upgrade job will be created and PSCA will start running the steps. The first step is a manual stop.

It creates 3 jobs, RFU, UPG and PT Upgrade.

Thursday, April 19, 2018

SQR: FindFiles

Via peoplecode it is very easy to read a directory and fetch filenames, leveraging the delivered function "FindFiles". Now try doing the same in SQR - not trivial at all.

Here is what I had to do. I am running PT 8.55.x, but it's pretty much the same in any recent PT version.
I essentially have to run a dir command at the Windows level to get the filenames matching a wildcard and write them to a temp file, then read that file back to get the filenames. Once done, I can delete the temp file. If the dir command does not return anything for my wildcard search, the temp file will simply be empty.
The other important thing here is the WAIT parameter on the call system command. If that is not provided, reading the temp file does not work, as it takes the system some time to write the contents to it.

let $comspec = getenv('COMSPEC')
let $TempFile = 'C:\temp\temp_file.txt'
let $FindFiles = $comspec || ' /c dir /b \\my_network_path\share\some_directory\*.pdf > ' ||  $TempFile
call system using $FindFiles #call-status WAIT

open $TempFile as 10 for-reading record=50 #filestat
while 1
   read 10 into $file-name:50
   ! exit the loop once a filename has been read, or on end-of-file
   ! ($file-name stays blank when the dir listing returned nothing)
   if $file-name != '' or #end-file
      break
   end-if
end-while

close 10

! delete text file
let $delCmd = $comspec || ' /c del /F /Q ' || $TempFile 
call system using $delCmd  #call-status WAIT

Way longer than a simple FindFiles command via peoplecode, but this does the trick.

Wednesday, April 4, 2018

PeopleSoft web service response namespace issue

Running PT 8.53.x, I wrote a simple custom "Hello World" web service. The service works fine, but I saw one issue: the namespace returned in the response message is not the one defined in the WSDL. So when a third-party consumer such as a .NET program executes this service, it fails to process the response because of the unrecognized namespace.

Request 


<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing/" xmlns:xsd="http://www.w3.org/2001/XMLSchema/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance/">

  <soapenv:Header xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  </soapenv:Header>
  <soapenv:Body xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <HelloWorld xmlns="http://xmlns.oracle.com/Enterprise/Tools/schemas/HELLOWORLD.V1"></HelloWorld>
  </soapenv:Body>
</soapenv:Envelope>

Response via PeopleSoft SOAP message template


<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing/" xmlns:xsd="http://www.w3.org/2001/XMLSchema/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance/">
  <soapenv:Header xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  </soapenv:Header>
  <soapenv:Body xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <HelloWorldResponse xmlns="http://xmlns.oracle.com/Enterprise/Tools/schemas/HELLOWORLDRESPONSE.V1">
      <HelloWorldResult>XYZ</HelloWorldResult>
    </HelloWorldResponse>
  </soapenv:Body>
</soapenv:Envelope>

Actual response via SOAPUI or Postman

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <soapenv:Body>
      <HelloWorldResponse xmlns="http://peoplesoft.com/HelloWorldResponseResponse">
         <HelloWorldResult>Success</HelloWorldResult>
      </HelloWorldResponse>
   </soapenv:Body>
</soapenv:Envelope>


As you can see, the problem is the namespace value returned for the root tag "HelloWorldResponse".

<HelloWorldResponse xmlns="http://peoplesoft.com/HelloWorldResponseResponse">


PeopleSoft puts the default value (not sure where this is stored)
http://peoplesoft.com/HelloWorldResponseResponse
instead of what is defined in the WSDL which is 
http://xmlns.oracle.com/Enterprise/Tools/schemas/HELLOWORLDRESPONSE.V1

I have a very basic synchronous service operation, with a request and response message and the handler OnRequest peoplecode is doing all the work.

Following was my application package peoplecode initially.

&response = CreateMessage(Operation.HELLOWORLD_SVCOP, %IntBroker_Response);
&xmldata = "<?xml version='1.0'?><HelloWorldResponse/>";
&xmlresponsedoc = CreateXmlDoc(&xmldata);
&xmlNode = &xmlresponsedoc.DocumentElement;

&HWResponseNode = &xmlNode.AddElement("HelloWorldResult");
&HWResponseNode.NodeValue = "Success";
&response.SetXmlDoc(&xmlresponsedoc);
Return &response;


So when creating the XmlDoc I was not specifying any namespace and I was hoping PeopleSoft will add it based on the schema definition. 

Instead, the following is what I had to do to correct the issue (only showing the changed lines; the rest of the code is unchanged).

&xmlresponsedoc = CreateXmlDoc("");

&docTypeNode = &xmlresponsedoc.CreateDocumentType("", "", "");
&rootNode = &xmlresponsedoc.CreateDocumentElement("HelloWorldResponse", "http://xmlns.oracle.com/Enterprise/Tools/schemas/HELLOWORLDRESPONSE.V1", &docTypeNode);

&xmlNode = &xmlresponsedoc.DocumentElement;

So I have to use the CreateDocumentElement method of the XmlDoc class in order to define the namespace, and to use this method I first had to initialize the &docTypeNode variable using the CreateDocumentType method.

After making the above change the response comes back correctly.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <soapenv:Body>
      <HelloWorldResponse xmlns="http://xmlns.oracle.com/Enterprise/Tools/schemas/HELLOWORLDRESPONSE.V1">
         <HelloWorldResult>Success</HelloWorldResult>
      </HelloWorldResponse>
   </soapenv:Body>
</soapenv:Envelope>

Monday, March 12, 2018

Application Engine Parallel Processing (batch)

Parallel processing comes into the picture when there is a requirement to process a large number of rows without compromising performance, which can otherwise suffer greatly with non-parallel processing. I am sharing this concept along with an actual example where it has been used.

I had written an application engine program which read data via a SQL object, performed some computations and finally wrote the result to an extract file which was sFTP'ed to the vendor - pretty much run-of-the-mill stuff. The AE was reading from PS_JOB, PS_PERSONAL_DATA, PS_EMAIL_ADDRESSES, etc. via a single SQL object in a peoplecode step, performing the data manipulation as required and writing to a file. It was sequentially selecting one employee at a time, performing the data manipulations and writing to the file. To process around 3K employees it was taking a little over an hour - not a lot of data, but far too much time to process this data-set.

So to solve this issue of performance, this is what I did. 
  1. I created a single TAO or temporary table. The fields in this table are essentially the unique list of fields or values that the single SQL object was returning. Did not build the TAO table yet, will do it in a later step.
  2. The TAO table has two keys, PROCESS_INSTANCE and EMPLID.
  3. Then before the peoplecode step was called added a new step to select from the various HR tables and write to this newly created TAO table. 
  4. SQL step looks something like this.
        INSERT INTO %Table(N_MY_TAO)
        SELECT %Bind(PROCESS_INSTANCE)
             , B.EMPLID
             .....
             .....
             .....
        FROM PS_JOB A, PS_PERSONAL_DATA B .....
        WHERE ....

  5. After this, added a new step to update statistics on the newly populated TAO table.
        %UpdateStats(N_MY_TAO)
  6. In the peoplecode step, replaced the SQL fetch as follows.
        &EESQL = CreateSQL(FetchSQL(SQL.N_MY_SQL), N_MY_AET.PROCESS_INSTANCE);
  7. I had already defined a state record in my AE which has the PROCESS_INSTANCE field in it, so I did not have to do anything different with the state record. In the N_MY_SQL SQL object I am selecting all fields FROM %Table(N_MY_TAO) WHERE PROCESS_INSTANCE = :1 ORDER BY EMPLID, with some SQL case statements and formatting rules defined in the SQL itself (a sketch of this SQL object follows the list).
  8. Added the newly created TAO table under the temp tables section of the program properties and provided an instance count of 1. The instance count has to be 1 or more; if it is set to 0 you will see the following message in your AE log and there won't be much of a performance improvement.

WARNING: NO DEDICATED INSTANCES AVAILABLE FOR N_MY_TAO - USING BASE TABLE. (108,544)

  9. Under PeopleTools > Utilities > Administration > PeopleTools Options, in my case the Temp Table Instances (Total) and Temp Table Instances (Online) are set to 3. So when I add 1 as the instance count in my AE and then build the TAO table, it creates PS_N_MY_TAO, PS_N_MY_TAO1, PS_N_MY_TAO2, PS_N_MY_TAO3 and PS_N_MY_TAO4, and when the process runs it uses PS_N_MY_TAO4, as the first three instances are reserved for online processing.

  10. You can add a step at the beginning or end to purge the TAO table, e.g. %TruncateTable(%Table(N_MY_TAO)), or better yet use the check-box under program properties "Use Delete for Truncate Table". With this option the TAO table is purged at the beginning of each run.
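
For reference, a rough sketch of the N_MY_SQL SQL object described in step 7 - the remaining columns, CASE expressions and formatting rules are elided here just like in the INSERT step above:

        SELECT EMPLID
             , .....
             , CASE WHEN ..... THEN ..... ELSE ..... END
        FROM %Table(N_MY_TAO)
        WHERE PROCESS_INSTANCE = :1
        ORDER BY EMPLID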

Addition of a single TAO table enabled the process to complete in 7 mins. Massive improvement.

Friday, March 17, 2017

Native sFTP transmission in PeopleSoft using RSA Keys

I have been successfully using PeopleSoft's URL definitions to set up sFTP transmissions with password authentication. Recently I was challenged to use sFTP with RSA keys instead. It seemed simple at first, but as I dug deeper I found there are a lot of nuances one needs to be aware of. Here is what I have learnt so far. I am using PT 8.53.26 for this exercise.

At a high level the steps are:
1. Create an RSA public/private key pair. I am using PuTTYgen, but there are a lot of other tools out there that do the same.
2. Create a URL definition in PeopleSoft and associate the RSA keys with it.

So let's look at creating the RSA key pair first.


Launch the PuTTY Key Generator, select SSH-2 RSA (the default) and click the Generate button. Once complete, the public key, key fingerprint and key comment values are displayed. The key comment can be updated to some other value or left as-is. Put in a key passphrase.
Save the keys by clicking the "Save public key" and "Save private key" buttons respectively.
Use the .pub extension for the public key and .ppk for the private key. The filenames can be anything of your choice; for this discussion we will call them putty-gen.pub and putty-gen.ppk.

The public key text that PuTTYgen displays (in OpenSSH format) is what we need next, so copy that and paste it into a text file. We will name this file sftpuser - no extension, just sftpuser. I will explain the reason behind this naming convention shortly.

Provide the putty-gen.pub or sftpuser file to the vendor so that they can add it to their sftp server.
Once the vendor sets up an account for you, they will provide the sftp address, the port (generally 22) and a userid. This userid should match the filename, so we will assume the userid created by the external vendor is sftpuser - hence the naming convention. When you send the public key to the vendor you won't yet know the userid, so at that point the file can be called anything; the filename matters more for the PeopleSoft setup than for the vendor's setup.

Via the PuTTYgen tool, navigate to Conversions > Export OpenSSH key and export the private key to a text file. Name this file sftpuser.ppk.

You can always come back later, load the private key file putty-gen.ppk in the PuTTY Key Generator, and update the private key passphrase.

While the vendor is doing their setup and configuration, we have the following two options in PeopleSoft.
The first, and preferred, option is to import the keys via the Digital Certificates component, as this stores them in the database. The second option is to store the keys on the file system. We will discuss both options here.

Digital Certificate Option (preferred option)

Navigate to PeopleTools > Security > Security Objects > Digital Certificates and add a new row. Select Type as SSH, provide a value for Alias, and a "Copy" hyperlink should show up. Click on that link, paste the contents of sftpuser.ppk and sftpuser into the appropriate boxes, and click Save (at the bottom-right of the page).

Next navigate to PeopleTools > Utilities > Administration > URLs and create a new URL.
Add in the sftp URL provided by the vendor. It could be something like
sftp://vendorname.com
or  sftp://vendorname.com/~/drop-directory
or sftp://vendorname.com/drop-directory

The ~ indicates that a successful logon takes you to the home directory, from where you can navigate to, say, a sub-directory called drop-directory where you have to drop the files.

Then click on the URL Properties link
Provide the following properties
AUTHTYPE = 1 - PUBLICKEY 
SSHKEYALIAS = alias set on the digital certificates component ( you can do a lookup here and pick the correct value)
USER = sftpuser (userid provided by the vendor)
PASSWORDKEY = private key passphrase (in plain text not encrypted) 

Option 2 – Keys stored on file system

We have to create some directories under the PS_SERVDIR location. If the call to transmit the files will be made from the application server, then with a decoupled PS_HOME this is PS_CFG_HOME\appserv\{Domain}, and for the process scheduler it is PS_CFG_HOME\appserv\prcs\{Domain}. With a traditional (non-decoupled) PS_HOME it is PS_HOME\appserv\{Domain} and PS_HOME\appserv\prcs\{Domain} respectively.

If you do not want to create sub-directories under here then there is some more information in Peoplebooks to create a new environment variable to point to a new location. 

So under PS_SERVDIR create a directory called sshkeys and two sub-directories under that called private and public. The names of these three directories cannot change. Copy the sftpuser.ppk file to the private directory and the sftpuser file to the public directory. The filename naming convention is primarily enforced for this option.

Now, back in PIA, create a new URL definition. The URL properties are what differ slightly from the previous option.
Provide the following properties
AUTHTYPE = 1 - PUBLICKEY 
PRIVATEKEY = sftpuser.ppk 
PUBLICKEY = sftpuser
USER = sftpuser (userid provided by the vendor)
PASSWORDKEY = private key passphrase (in plain text not encrypted) 

So that's essentially it. 

Following is the basic peoplecode call to transmit the file. 

&returncode = PutAttachment(URL.N_VENDOR, &filename, &fullpathtofile);

where N_VENDOR is the URL definition created above,
&filename is the filename, as in test.txt,
and &fullpathtofile is the complete path, as in C:\temp\test.txt
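
It is also worth checking the return code; a minimal sketch using the delivered attachment constants:

&returncode = PutAttachment(URL.N_VENDOR, &filename, &fullpathtofile);
If &returncode = %Attachment_Success Then
   /* file was transmitted to the vendor's sftp server */
Else
   /* transmission failed - handle or log the error; &returncode holds the reason code */
End-If;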


Sunday, December 11, 2016

JMS Connectors and Queues

The goal of this exercise is to design a web service solution where we host the service and it is consumed by a SaaS solution. In this POC I am exposing/hosting a simple SOAP-based XML web service.
I am testing this out on PeopleSoft HCM 9.1 running PT 8.53.26.

Transaction data flow is as follows:
  • Expose web service which takes in the request message
  • Request XML message is written to the request queue on the JMS server
  • A listener process reads the request queue and fetches the message
  • A process/program processes the request message and populates the result or response XML message on the response queue.
  • The web service reads the response queue and sends the response back to the caller
JMS Correlation ID is used as a key to fetch/write the appropriate message from/to the queue.

JMS Server and Queue Setup

Following are the steps to set up a JMS server, a connection factory and three queues (request, response and error) via the WebLogic admin console. I am running WebLogic version 10.3.6.0.13.

Open the WebLogic console and navigate to Services > Messaging > JMS Modules, click Lock&Edit, then New, then enter a name for the JMS Module and click "Next":

Select PIA as Target and click "Next": 
Uncheck "Would you like to add resources to this JMS system module?" and click "Finish"

Navigate to Services > Messaging > JMS Servers and Click "new", enter the name of the JMS Server and click "Next":

Navigate to Services > Messaging > JMS Modules and select the JMS Module created in previous steps.

Now create the first queue, the RequestQueue.

Similarly create two more queues, the ResponseQueue and the ErrorQueue.

PeopleSoft setup

Nodes:
Create two nodes, of type external, one for listening on the request queue and another one to write back to the response queue. Setup is pretty much identical except for the queue name. Use the JMSTARGET connector ID for both the nodes and key in properties as follows:
JMSFactory = MyConnectionFactory (this is the JNDI name defined via the weblogic console)
JMSMessageType = Text
JMSProvider = Weblogic
JMSQueue = RequestQueue for one node and ResponseQueue for the other node
JMSUrl = Weblogic JMS Server in the form t3://server:port

Messages:

Create two non-rowset based messages, again one for request and the other for response.
Added a schema to each message (this is optional, you can manage it via peoplecode too). Request message for this POC has the following structure:

<?xml version='1.0'?>
<person>
    <emplid>string</emplid>
</person>

and response message would be as follows:

<?xml version='1.0'?>
<person>
    <name>string</name>

</person>

Service Operation:

Created a service and assigned two service operations to it, and assigned security to both service operations. The listening service operation has an application package OnNotify handler defined in peoplecode (shown below); the response service operation has no handler.
So in case of the listening service operation the routing would be:
Sender Node = newly created JMS listening node (above)
Receiver Node = local default node 

In case of response service operation the routing would be:
Sender Node = local default node
Receiver Node =  newly created JMS response node (above)

Handler Peoplecode:

method onNotify
   /+ &MSG as Message +/
   /+ Extends/implements PS_PT:Integration:INotificationHandler.OnNotify +/
  
   Local XmlDoc &xmldoc = &MSG.GetXmlDoc();
   Local XmlNode &xmlNode = &xmldoc.DocumentElement;
   Local string &emplid = %This.FetchDataValue(&xmldoc, 1, "emplid");
   Local string &name;
  
   SQLExec("select ISNULL((SELECT NAME FROM PS_PERSONAL_DATA WHERE EMPLID = :1),'No Name')", &emplid, &name);
  
/* create the response message */
Local Message &ResponseMSG = CreateMessage(Operation.JMS_RESP_SO, %IntBroker_Request);
   Local string &xmldata = "<?xml version='1.0'?><person/>";
   Local XmlDoc &ResponseDOC = CreateXmlDoc(&xmldata);
   &xmlNode = &ResponseDOC.DocumentElement;
  
   Local XmlNode &NameNode = &xmlNode.AddElement("name");
   &NameNode.NodeValue = &name;
  
/* set the xmldoc and publish the response message */
   &ResponseMSG.SetXmlDoc(&ResponseDOC);
   %IntBroker.Publish(&ResponseMSG);
  
end-method;

method FetchDataValue
   /+ &xmldoc as XmlDoc, +/
   /+ &row as Number, +/
   /+ &dataTag as String +/
   /+ Returns String +/
  
   Local string &dataValue;
   Local array of XmlNode &dataList;
  
   &dataList = &xmldoc.DocumentElement.FindNodes(&dataTag);
   If &dataList.Len = 0 Then
      &dataValue = "";
   Else
      &dataValue = &dataList [&row].NodeValue;
   End-If;

   Return &dataValue;

end-method;

IB changes:

The integrationgateway.properties file can be edited via the Gateway Setup Properties link on the Gateways component. I updated it as follows for the local gateway.

Showing only the pieces that I changed, under the JMS configuration section. Only the listening queue needs to be defined here. The error queue is optional but good to have.

ig.jms.Queues=1

ig.jms.Queue1=RequestQueue
ig.jms.Queue1.Provider=Weblogic
ig.jms.Queue1.JMSFactory=MyConnectionFactory
ig.jms.Queue1.Url=t3://server:port
# following service operation details are required only for a listening operation.  
ig.jms.Queue1.OperationName=JMS_REQ_SO.v1
ig.jms.Queue1.OperationVersion=v1
ig.jms.Queue1.RequestingNode=JMS_LISTENER_NODE
ig.jms.Queue1.DestinationNode=PSFT_HR


ig.jms.ErrorQueue=ErrorQueue
ig.jms.ErrorQueue-Provider=Weblogic
ig.jms.ErrorQueue-JMSFactory=MyConnectionFactory
ig.jms.ErrorQueue-Url=t3://server:port 


Testing:


Create a message via the WebLogic console in MyQueue1 (the RequestQueue) and verify via the Integration Broker Monitor that the message is picked up, processed, and a response written to MyQueue2 (the ResponseQueue).
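
For the body of the test message, anything matching the request schema above works, for example (the emplid value is just a placeholder):

<?xml version='1.0'?>
<person>
    <emplid>KU0001</emplid>
</person>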

Any issues or errors will be written to errorLog.html and/or msgLog.html files on the webserver.