
Integrating with Oracle Sales Cloud using SOAP web services and REST APIs (Part 1)


Sales Cloud provides several types of interfaces to facilitate integration with other applications within your enterprise or on the cloud, such as SOAP web services, REST APIs, Events, file-loaders, and BI Reports. The focus of this blog series is SOAP web services and REST APIs.

Sales Cloud provides SOAP web services and REST APIs for other applications to operate on core Sales Cloud functions. Sales Cloud also provides an extension framework that allows outbound invocation to other SOAP services within your on-premises systems or on the cloud.

In this blog series, I will cover the following topics briefly.

1.  Invoking Sales Cloud SOAP web services

a. Identifying the Sales Cloud SOAP web service to be invoked
b. Sample invocation of a web service
c. Sample invocation from an ADF application

2.  Invoking external SOAP Web Services from Sales Cloud (covered in Part 2)

3.  Invoking Sales Cloud REST APIs from other external applications (covered in Part 3)

4.  Invoking external REST APIs from Sales Cloud (currently unavailable, but planned as an upcoming feature)

1. Invoking Sales Cloud SOAP web services from external applications

There are two main types of services that Sales Cloud exposes:

–          ADF Services – These services allow you to perform CRUD operations on Sales Cloud business objects, for example the Sales Party Service, Opportunity Service, etc. Using these services you can typically perform operations such as get, find, create, delete, and update on Sales Cloud objects. These services are typically useful for UI-driven integrations, such as looking up Sales Cloud information from external application UIs or using third-party interfaces to create/update data in Sales Cloud. They are also used in non-UI-driven integration use cases such as the initial upload of business or setup data, synchronizing data with an external system, etc.

–          Composite Services – These services involve more logic than CRUD and often involve human workflows, rules, etc. They perform a business function, such as the Orchestration Order Service, and are used when building larger process-based integrations with external systems. These services are usually asynchronous in nature and are not typically used for UI integration patterns. They will not be the focus of this blog.

1a. Identifying the SOAP web service to be invoked

All Sales Cloud web service metadata is available through Developer Connect within your Sales Cloud instance. Note: HCM, ERP, and SCM web services are also available in the same location. To navigate to Developer Connect, log into your Sales Cloud instance > Navigator (the hamburger icon) > Tools > Developer Connect

DeveloperConnectNav1

Once you connect, you will be able to search for any service that you are interested in. Before you proceed, click the Synchronize button to ensure that you have the latest definitions; it may take a few seconds for the synchronization to complete. In the example below, I’m searching for the Opportunity Service. You can also view the WSDL directly using the WSDL download link.

DevConnect2

You can drill down into the service and look for additional information about the service, such as the endpoint, security policies, operations, payload structures, and even sample code for invocation.

 

DevConnect3

 

DevConnect4

 

Note: The SOAP web services can be used as part of data integration or UI integration requirements.
When using Oracle Integration Cloud Service, Oracle SOA Cloud Service, or any other data integration platform, these SOAP web services can be utilized to query and edit data in Sales Cloud.
When using Java Cloud Service, you can build an Oracle ADF application which uses the SOAP services to display and modify data on a custom-built UI.

1b. Sample invocation of a web service

Take a closer look at the WSDL or the details tabs in Developer Connect. A couple of items to notice here:

–          Most operations have synchronous as well as asynchronous variations. This blog will focus on synchronous invocations only. For asynchronous invocations, please refer to http://www.ateam-oracle.com/using-soapui-for-secure-asynchronous-web-service-invocations-in-fusion-applications.

–          Notice the security policy entry under the binding. The synchronous operations are usually associated with wss11_saml_or_username_token_with_message_protection_service_policy.

image003

The get operation has a simpler request payload since it typically uses only an ID column. The create and update operations have larger payloads but are straightforward to execute. The focus of this blog will be the “Find” operation, which is equivalent to the “Advanced Search” that you typically find in UIs.

The Find operation allows you to find one or more entities based on a given combination of search criteria. Additionally, Find operations allow you to filter the resulting fields that you want to see. Imagine that you want to build a simple search feature in your custom application where you want the search results to display a valid set of Sales Accounts, as defined in Sales Cloud. You may want the results to display as a list of Accounts (organizations), with just four key pieces of information: Organization Name, Address, City, and Country.

image005

If the service invocation with the Find operation returns ALL details for each Sales Party, it becomes hard for the custom application to handle the output, and there is an unnecessary performance overhead as well. Fortunately, the Find operation provides a set of parameters in the input payload that can be used to control the output.

For example, you can search for all customers of type Organization using the query below. The request translates to: give me the top 10 hits for Organizations whose names start with “Art”, and return only the Name, Address, City, and Country details of these Organizations.

Request:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <typ:findSalesParty xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/salesParties/salesPartiesService/types/">
         <typ:findCriteria xmlns:typ1="http://xmlns.oracle.com/adf/svc/types/">
            <typ1:fetchStart>0</typ1:fetchStart>
            <typ1:fetchSize>10</typ1:fetchSize>
            <typ1:filter>
               <typ1:group>
               <typ1:conjunction>And</typ1:conjunction>
                  <typ1:item>
                     <typ1:attribute>PartyName</typ1:attribute>
                     <typ1:operator>STARTSWITH</typ1:operator>
                     <typ1:value>Art</typ1:value>
                  </typ1:item>
                  <typ1:item>
                     <typ1:attribute>PartyType</typ1:attribute>
                     <typ1:operator>=</typ1:operator>
                     <typ1:value>ORGANIZATION</typ1:value>
                  </typ1:item>	
               </typ1:group>
            </typ1:filter>
            <typ1:findAttribute>PartyName</typ1:findAttribute>
            <typ1:findAttribute>OrganizationParty</typ1:findAttribute>
            <typ1:childFindCriteria>
            		<typ1:childAttrName>OrganizationParty</typ1:childAttrName>
            		<typ1:findAttribute>Address1</typ1:findAttribute>
            		<typ1:findAttribute>City</typ1:findAttribute>
            		<typ1:findAttribute>Country</typ1:findAttribute>
            </typ1:childFindCriteria>
         </typ:findCriteria>
      </typ:findSalesParty>
   </soapenv:Body>
</soapenv:Envelope>

When executed against my Sales Cloud instance, the response looks something like this:

Response:

<env:Envelope ..>
   <env:Header>..</env:Header>
   <env:Body>
      <ns0:findSalesPartyResponse ..">
         <ns2:result ..>
            <ns1:PartyName>Artemis International Solutions Corp</ns1:PartyName>
            <ns1:OrganizationParty ..>
               <ns3:Address1>1000 Louisiana</ns3:Address1>
               <ns3:Country>US</ns3:Country>
               <ns3:City>Houston</ns3:City>
            </ns1:OrganizationParty>
         </ns2:result>
         <ns2:result ..>
            <ns1:PartyName>Artisan Press Ltd</ns1:PartyName>
            <ns1:OrganizationParty ..>
               <ns3:Address1>4  BOSTON ROAD</ns3:Address1>
               <ns3:Country>GB</ns3:Country>
               <ns3:City>Leeds</ns3:City>
            </ns1:OrganizationParty>
         </ns2:result>
         <ns2:result ..>
            <ns1:PartyName>Artwise Messe</ns1:PartyName>
            <ns1:OrganizationParty..>
               <ns3:Address1>Bergengrünstr. 9</ns3:Address1>
               <ns3:Country>DE</ns3:Country>
               <ns3:City>Essen</ns3:City>
            </ns1:OrganizationParty>
         </ns2:result>
      </ns0:findSalesPartyResponse>
   </env:Body>
</env:Envelope>

Note that the response is much smaller and more manageable. The key here is to use findAttribute to control the output. In my system, the response time for this request without findAttribute was 2000 to 2500 ms; with findAttribute, the response time improved to 500 ms.

In general, you can play with the findCriteria to exactly define what you want to search and what you want in the output. Think of it as the web service equivalent of the find functionality that you see in many enterprise applications. This is a powerful feature and is present only for the Find Operation.

Another key point to note about these web services is that they encapsulate details originating from more than one individual object in Sales Cloud. For example, in the case of SalesParty, in addition to the basic details of the Sales Party, the service provides dependent information from the OrganizationParty and SalesAccount objects. Each of these objects in turn brings in the dependencies that it needs.

This allows consumers of the web service to get all relevant information in one go, without having to make multiple calls to retrieve related information. This makes ADF services granular, right-sized business services, which is a cornerstone of building robust web-service-based integration architectures.

1c. Sample invocation from an ADF application

Let us look at some techniques for invoking Sales Cloud services from a custom-built ADF application. This is useful when you have a standalone ADF-based application in your organization, or when you are building ADF extensions to Sales Cloud, say on Oracle Java Cloud Service.

Some parts of these instructions can be useful when invoking Sales Cloud web services from any J2EE based application.

There are several ways to invoke Sales Cloud web services from ADF. We will look at two simple options:

–          Using ADF Web Service Data Control
–          Using Web Service Proxy

ADF Web Service data control is a simple and declarative approach to invoke a web service from ADF pages. The web service data control is particularly useful when the end objective is to simply display the results of a web service invocation on an ADF Page.

The Web Service Proxy option provides more flexibility and control, and can be used in conjunction with a programmatic VO or a bean data control.

The blog entry https://blogs.oracle.com/jdevotnharvest/entry/which_option_to_choose_for provides guidelines to pick a suitable approach when accessing web services in ADF.

If you have JDeveloper 11g installed in your system, you can simply follow the steps below to invoke your Sales Cloud on premise or SaaS instance.

Using ADF Web Service Data Control

  • Create an Application of type Fusion Web Application. I named mine ‘SalesPartySample’. Ensure that you select and shuttle ‘Web Services’ on the Model project creation page. Accept all other defaults in the wizard.

image009

  • Under the Model project, create a web service data control and provide the WSDL for the SalesParty web service. Select the getSalesParty operation. Note that you can also include custom HTTP headers. We won’t be using them in this example, but they are one way of authenticating against a Sales Cloud service.

image013image015

  • In the Endpoint authentication, provide the username/password

image017

  • At this point you should see something like this in the data controls. Notice the Parameters and results

image019

  • Now we will use this data control from a simple UI. In the ViewController project, create a new JSF page. I created mine as index.jspx
  • Drag and drop the partyId parameter from the data control onto the jspx page. Choose the display type Text Input w/ Label. Underneath it, drag and drop the getSalesParty(Long) operation and choose to display it as an ADF Button. Underneath that, drag and drop PartyName as Text Output, then drag and drop Address1, City, and Country from under OrganizationParty. Your jspx should look like the following:

image021

  • Simply right-click your .jspx and choose Run.
  • In this page, enter a Party ID. I entered the Party ID for ‘Artisan Press Ltd’ that I got from the earlier SOAPUI exercise.

image023

  • It’s that simple! No coding effort! Of course, the UI can be made much better looking, and more complex use cases, such as using custom HTTP headers, will require some amount of coding.
  • At this point, if you face an SSL-related error, it is because your JDeveloper keystore doesn’t have the necessary SSL certificates imported. To fix this, navigate to your Sales Cloud instance and export the SSL certificate as a .pem file using your favorite browser (there are plenty of instructions on the internet). Then import it into your JDev keystore as follows:

C:\Oracle\Middleware\jdk160_24\bin>keytool -importcert -alias fusionapps -file <locationtotheexportedPEM>\mypk.pem -trustcacerts -keystore C:\Oracle\Middleware\wlserver_10.3\server\lib\DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

Using Web Service Proxy

Next we will look at using a Web Service Proxy to invoke the Sales Party service. This time we will not use the get operation but the find operation, and we will build the same FindCriteria that we used in SOAPUI, this time in Java.

  • Create a new Application and Project that will house your proxy exclusively (I will explain later why this makes sense)
  • Click New and select Web Service Proxy

image027

  • Choose the JAX-WS style
  • Enter the package names as oracle.sample.salesparty.proxy and oracle.sample.salesparty.proxy.types
  • Unselect generate as async and subsequently select Don’t Generate Async

image029

  • You should see SalesPartyServiceSoapHttpPortClient.java with the text, “Add your code to call the desired methods”
  • Best Practice: It is good practice not to use the above proxy client directly, because when the proxy gets regenerated, any changes made to the client will be lost. Instead, create a separate facade. The facade also allows you to control the input and output fields you would like to work with, instead of working with the entire payload. In our example, we would like to take in only a ‘startsWith’ input.
  • Best Practice: As mentioned earlier, leave your proxy in its own project. In fact, treat it as a standalone deployable entity. The facade and the rest of the application components, such as the UI and data controls, constitute the main application, which includes the proxy application as a dependency. This allows for distributed development, as the same proxy can be used by multiple applications. It reduces redundancy and also ensures that if the proxy is regenerated, all applications pull the latest code.
  • Create a facade like the one below. The idea is to build the same XML request payload using findCriteria, just as we built it in the earlier example. The getSalesPartyList method takes only “startsWith” as input from the user, builds the FindCriteria, and executes the service, returning a list of Sales Party records.
// Imports for the facade. The proxy and type classes come from the packages chosen in the
// proxy generation wizard (oracle.sample.salesparty.proxy[.types] in this example);
// adjust them if you generated the proxy into different packages.
import java.net.MalformedURLException;
import java.net.URL;
import java.util.List;

import javax.xml.namespace.QName;
import javax.xml.ws.BindingProvider;

import com.sun.xml.ws.developer.WSBindingProvider;
import weblogic.wsee.jws.jaxws.owsm.SecurityPolicyFeature;

import oracle.sample.salesparty.proxy.*;
import oracle.sample.salesparty.proxy.types.*;

public class SalesPartyFacade {

    private SalesPartyService_Service salesPartyService_Service;

    public List<SalesParty> getSalesPartyList(String startsWith) throws ServiceException {
        List<SalesParty> salesParties;
        FindCriteria findCriteria = createFindCriteria(startsWith);

        // Attach the OWSM client security policy required by the service
        SecurityPolicyFeature[] securityFeatures =
            new SecurityPolicyFeature[] {
                new SecurityPolicyFeature("oracle/wss_username_token_over_ssl_client_policy") };

        // salesPartyService_Service = new SalesPartyService_Service();
        try {
            // Always point at the WSDL of the target environment (see the best practices below)
            salesPartyService_Service =
                new SalesPartyService_Service(
                    new URL("https://your_sales_cloud_URL/crmCommonSalesParties/SalesPartyService?WSDL"),
                    new QName("http://xmlns.oracle.com/apps/crmCommon/salesParties/salesPartiesService/",
                              "SalesPartyService"));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        SalesPartyService salesPartyService =
            salesPartyService_Service.getSalesPartyServiceSoapHttpPort(securityFeatures);

        // Override the credentials used for this invocation
        WSBindingProvider wsbp = (WSBindingProvider) salesPartyService;
        wsbp.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, "User1");
        wsbp.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, "Passwd1");

        // Wrap the FindCriteria in the request element and invoke the find operation
        FindSalesParty fSalesParty = new FindSalesParty();
        fSalesParty.setFindCriteria(findCriteria);
        salesParties = salesPartyService.findSalesParty(fSalesParty).getResult();

        return salesParties;
    }

    private static FindCriteria createFindCriteria(String startsWith) {
        FindCriteria findCriteria = new FindCriteria();
        ChildFindCriteria childFindCriteria = new ChildFindCriteria();
        findCriteria.setFetchStart(0);
        findCriteria.setFetchSize(10);

        // Filter: PartyName starts with the given value AND PartyType = ORGANIZATION
        ViewCriteria filter = new ViewCriteria();
        ViewCriteriaRow group1 = new ViewCriteriaRow();
        ViewCriteriaItem item1 = new ViewCriteriaItem();
        item1.setAttribute("PartyName");
        item1.setOperator("STARTSWITH");
        item1.getValue().add(startsWith);

        ViewCriteriaItem item2 = new ViewCriteriaItem();
        item2.setAttribute("PartyType");
        item2.setOperator("=");
        item2.getValue().add("ORGANIZATION");

        group1.getItem().add(item1);
        group1.getItem().add(item2);
        group1.setConjunction(Conjunction.AND);

        filter.getGroup().add(group1);
        findCriteria.setFilter(filter);

        // Restrict the output to the attributes we actually need
        findCriteria.getFindAttribute().add("PartyName");
        findCriteria.getFindAttribute().add("OrganizationParty");

/*      childFindCriteria.setChildAttrName("OrganizationParty");
        childFindCriteria.getFindAttribute().add("Address1");
        childFindCriteria.getFindAttribute().add("City");
        childFindCriteria.getFindAttribute().add("Country");
        findCriteria.getChildFindCriteria().add(childFindCriteria);
*/
        return findCriteria;
    }
}
  • Best Practice: As shown in the code, you can use the WSBindingProvider to override the username and password. This is especially useful when multiple projects depend on the same proxy. Each project can then override the credentials and security policy in its own facade to suit its needs.
  • Best Practice: When instantiating the proxy in the facade, instead of using the no-argument constructor (which I intentionally commented out), use the constructor that accepts the WSDL. Here you provide the WSDL of the SalesParty service in your Sales Cloud environment (even though you already used the WSDL when building the proxy in the wizard). The reason for doing this again is as follows: at design time you would have used a WSDL from a specific environment, but the proxy code can be ported to a different environment. With this approach you do not need to regenerate the proxy at that point; by providing the WSDL every time in the facade, you ensure that your proxy is using the latest WSDL definitions. As an alternative, you sometimes see people overriding the endpoint alone using ENDPOINT_ADDRESS_PROPERTY. However, this alternative approach (not overriding the WSDL but simply overriding the endpoint) has two pitfalls: the old WSDL may become unavailable, and the old WSDL could have security-related configuration that is incorrect for the new endpoint, causing security issues.
  • Best Practice: You can also externalize the WSDL URL by creating it as an entry in a property file and having the proxy reference it. Since you only need to modify the property file, you do not need access to the proxy source code; you just need the .ear file being deployed. Another advantage of this approach is that multiple proxies can leverage the same property file, allowing you to quickly modify applications during environment migration. You can also use scripting to automate the endpoint changes in the property files to assist deployment automation. (A minimal sketch of this approach appears right after this list.)
  • Now that you have the facade, you can write a simple test class to exercise it:
        String filter = "Art";
        SalesPartyFacade spf = new SalesPartyFacade();
        List<SalesParty> salesParties = spf.getSalesPartyList(filter);
        for (SalesParty sp : salesParties) {
            System.out.print("Party Name = " + sp.getPartyName().getValue() + "\n");
            System.out.print("Address1 = " + sp.getOrganizationParty().get(0).getAddress1().getValue() + "\n");
            System.out.print("City = " + sp.getOrganizationParty().get(0).getCity().getValue() + "\n");
            System.out.print("Country = " + sp.getOrganizationParty().get(0).getCountry().getValue() + "\n");
            System.out.println("\n\n");
        }
  • This facade can now be used in any manner to invoke the Sales Party Service from your UI.
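
To make the property-file best practice concrete, here is a minimal sketch of a small locator class that reads the Sales Cloud WSDL URL from a property file packaged with the application. The file name integration.properties and the property key salesparty.wsdl.url are illustrative assumptions; the service QName is the same one used in the facade above.

import java.io.InputStream;
import java.net.URL;
import java.util.Properties;
import javax.xml.namespace.QName;

// Generated proxy class; the package name depends on your proxy generation settings
import oracle.sample.salesparty.proxy.SalesPartyService_Service;

public class SalesPartyServiceLocator {

    // Illustrative names only; package the property file with the deployed .ear
    private static final String PROPERTY_FILE = "/integration.properties";
    private static final String WSDL_URL_KEY = "salesparty.wsdl.url";

    public static SalesPartyService_Service locate() throws Exception {
        Properties props = new Properties();
        InputStream in = SalesPartyServiceLocator.class.getResourceAsStream(PROPERTY_FILE);
        try {
            // e.g. salesparty.wsdl.url=https://your_sales_cloud_URL/crmCommonSalesParties/SalesPartyService?WSDL
            props.load(in);
        } finally {
            in.close();
        }
        URL wsdlUrl = new URL(props.getProperty(WSDL_URL_KEY));
        // Same service QName as in the facade; only the WSDL location is externalized
        return new SalesPartyService_Service(wsdlUrl,
            new QName("http://xmlns.oracle.com/apps/crmCommon/salesParties/salesPartiesService/",
                      "SalesPartyService"));
    }
}

The facade can then call SalesPartyServiceLocator.locate() instead of constructing the service with a hard-coded WSDL URL, so that moving to a new environment only requires editing the property file.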

This concludes Part 1 of this blog series. In Part 2 I’ll be talking about invoking external SOAP and REST services from Fusion Applications.

 

 

 


Developer-controlled BI Cache Settings (On-Premise and SaaS)


Introduction

As an Oracle Business Intelligence (BI) developer, you may not always have direct access to the BI Common Semantic Model (RPD) or the underlying operating system. This can make it difficult or nearly impossible to change the global cache settings. That said, as a developer, you can still change the cache settings at the analysis request level. This can be done from the Advanced tab of the analysis request.

Main Article

If your dashboard is not refreshing and you feel the problem may be cache related, below are four suggestions that may help to resolve your cache issue. It is recommended to perform these changes individually, and in the order below. Performing each step independently will assist in identifying where the caching is occurring. This may be beneficial if you are experiencing caching on multiple dashboards and need to repeat the solution.

1) Check Bypass Oracle BI Presentation Services Cache

2) Change Partial Update Settings

3) Add an Advanced SQL Clause

4) Clear All Presentation Services Cache

The article below outlines these steps in further detail.

1. Check Bypass Oracle BI Presentation Services Cache

When users access Answers to run queries, Presentation Services caches the results of the queries. Presentation Services uses the request key and the logical SQL string to determine whether subsequent queries can use cached results. If the cache can be shared, the results of subsequent queries are not stored again.

There may be instances where you want to bypass the Presentation Services Cache and force the analysis request to run from scratch. To do this, begin by checking the “Bypass Oracle BI Presentation Services Cache” checkbox from the Advanced tab of the analysis request. To apply this setting, you must click the “Apply SQL” button at the bottom of the tab and then click Save.


 

2. Change Partial Update Settings

Partial Update defaults to Affected Views, which means the SQL and HTML code are only updated by an event such as drilling or sorting. The advantage of having Partial Update set to Affected Views is that performance may be enhanced, since not all views are redrawn each time. That said, at times this may be undesirable, especially if you are noticing some views with stale or incomplete data. If you are experiencing this, changing the dropdown to “Entire Report” will force all views to refresh every time the analysis request is accessed. To apply this setting, you must click the “Apply SQL” button at the bottom of the tab and then click Save.


3. Add an Advanced SQL Clause

If checking “Bypass Oracle BI Presentation Services Cache” and changing “Partial Update” to “Entire Report” does not resolve your refresh issue, you may need to bypass the BI Server Cache. The BI Server saves the results of the query in files and reuses the results when a similar query is requested. Therefore, if the SQL has not changed, Oracle BI will choose not to re-run the query and the results may not reflect the most recent ETL updates. To Bypass the BI Server Cache, type “SET VARIABLE DISABLE_CACHE_HIT=1” in the Prefix section of the “Advanced SQL Clause”, click the “Apply SQL” button at the bottom of the tab and then click Save.

SET VARIABLE DISABLE_CACHE_HIT=1;


 

4. Clear All Presentation Services Cache

If you are a developer with Administration privileges, you can also try clearing all Presentation Services cache. This is accessed through the Administration link (top right from Analytics) and then clicking on Manage Sessions. Once in the “Cursor Cache” section, click on “Close All Cursors”. This will clear all Presentation Services cache. It is important to note that when clicking on “Close All Cursors” you are affecting all currently active user sessions. This may have an adverse effect on reports or dashboards that rely on cached results for improved performance.


Further Reading

Managing Performance Tuning and Query Caching

Summary

This article describes four areas available in Presentation Services to manage the BI cache. The first three suggestions (below) affect the cache on a per-analysis-request basis and are changed on the Advanced tab.

1) Check Bypass Oracle BI Presentation Services Cache

2) Change Partial Update Settings

3) Add an Advanced SQL Clause

As with all code changes, it is advised that you make these changes in a test/development environment and that you archive the request before making the change. This is particularly important when making changes on the Advanced tab and clicking the Apply SQL button. Once this is done, Oracle BI creates a new analysis based on the SQL statement that you have added or modified; therefore, you lose all views and formatting you have previously created for the analysis. The XML code is also modified for the new analysis. Thus, it is always advisable to make sure you have a backup you can restore from.
The fourth suggestion, available only to those with Administrator privileges, clears all Presentation Services cache for all open requests.

4) Clear All Presentation Services Cache

This option should be used as a last resort as it affects all current active user sessions and may decrease the performance of reports or dashboards relying on cached results.

 

Getting started with Taleo Connect Client (TCC)


Introduction

The Taleo set of Cloud Offerings, such as Recruiting, Performance, Learn, Onboarding, etc., comes with a powerful tool for configuring and managing batch-style integration requirements for both import and export scenarios.

This article is meant to provide some guidance for first steps in using the Taleo Connect Client (TCC); it is not a substitute for the product documentation.

In technical terms, the TCC acts as a frontend application that is deployed on any Windows or Unix based client system. It communicates with the Taleo backend via web services and takes away the burden of dealing with technical details of the backend services. Instead, the user can focus on the relevant business objects and the mapping of attributes to the files used for exporting and importing information from and to Taleo.

The integration process workflow is built in TCC and determines how to extract information from or upload information to the Taleo Cloud service. Specific editors are offered for each of these activities.

Main Article

Downloading and Installing the Software

The latest version of TCC is available on the Oracle Software Delivery Cloud. After the login we select ‘Taleo Products’ in the Product Pack dropdown list and Microsoft Windows (32bit) as Platform. In the list of results we then choose the latest Oracle Taleo Enterprise Edition entry (currently 14A), leading to the actual download location:

Screen Shot 2014-11-28 at 11.17.06

It is important to download both the Connect Client as well as the corresponding Data Model package, also referred to as the ‘Product Integration Package (PIP)’ in the product documentation.

Both packages come with the usual installer. The Data Model installer must run first as the Connect Client installer will ask for the installation folder of the Data Models.

Connecting to the Taleo Cloud Service

When we launch TCC for the first time we have to provide the connection details to the Taleo Cloud Service as well as the credentials for a user account that is enabled for the use of TCC:

Screen Shot 2014-11-28 at 11.18.47

The entries here are straightforward and will be persisted for the next time. Once we are done, we click the Ping button to verify that the provided details are correct. Note that while this looks like a login action, it actually only verifies the connection along with the credentials; any further configuration is done in offline mode. TCC connects to the Taleo Cloud Service only for the actual data exchanges.

Exporting Data

TCC provides a rich business model covering all entities in a particular Taleo product version. This can be leveraged to create specifications describing the information to be extracted. Each export is based on a root entity from which all fields and relationships are derived. From that entity, fields and relationships can be selected (projected) for extraction.

In a first attempt we want to go ahead and export Candidate records from the Taleo Recruiting Cloud. Exporting data with TCC requires an export definition as well as an export configuration. You can think of the export definition (the same applies to import definition later) as the specification of the data objects in the Taleo Cloud such as a Candidate, Application, EmployeeGoal, etc. along with the selection of attributes, filtering of data, and sorting of the result set.

Export Definition

In order to create an export definition we select File -> New -> New Export Wizard from the menu. Next we choose the right product and version (Recruiting 14A in our case) as well as the main object to be exported (Candidate):

Screen Shot 2015-01-27 at 14.53.49

Clicking Finish brings us to the design perspective of TCC to further specify the export. The main information on the General tab is the Export mode, which defaults to CSV-entity. On the Projections tab we next specify the object attributes to be included in the export. TCC allows for simple drag and drop of either single attributes or entire structures from the right-hand entity view into the Projections area:

Screen Shot 2015-01-27 at 15.04.28

In this example we simply pulled the whole Candidate structure into the Projections area. Note that this includes every attribute of the object, but not sub-structures such as Applications, etc. Further attributes from such sub-structures can be added in the same way if required.

Also note that the order in the Projections tab will later define the order of attributes in the exported file. It simply defaults to the alphabetical order, but we are free to adjust this as required using the Up and Down buttons.

In the Filters tab one can specify simple and complex filter criteria for the data to be exported. In our example we simply chose to filter for Candidates living in San Diego:

Screen Shot 2015-01-27 at 15.12.30

Note that we can use the same drag & drop of attributes from the entity view on the right hand side into the field area that first defaults to ‘path’. Finally, the same mechanism also applies to the Sorting tab in which we specify to sort by LastName and FirstName in ascending order:

Screen Shot 2015-01-27 at 15.16.09

In a last step we save the completed export definition as we will need it in a bit when we continue with the export configuration.

Export Configuration

The export configuration complements the export definition with operational details of the export process. The export configuration defines the integration process workflow that sends the request to the Taleo Cloud service and later retrieves the response file asynchronously. It uses a T-SOAP message format containing information wrapped in an industry standard SOAP envelope along with Taleo specific information.

In order to create the configuration launch the wizard via File -> New -> New Configuration. In that wizard reference the export definition from above and confirm the endpoint of the Taleo Cloud Service.

Screen Shot 2014-11-28 at 11.29.12

You can now tweak operational specifications such as the location of the csv file with the exported data, the file name pattern, etc. Furthermore, there is the ability to specify email alert notifications for success and error cases when running the export. In our case the defaults fit for our requirements and we save the export configuration.

Running the Export

Kicking off the actual export is as simple as clicking the water wheel icon in the icon bar. With that, TCC changes to the runtime perspective, providing a detailed monitoring view of the execution steps of the export:

Screen Shot 2014-11-28 at 11.30.10

It is worth mentioning that the entire processing in the Taleo Cloud is done asynchronously. You can see from the above that the processing request is prepared and then sent to the Taleo service. From there on, TCC polls the Taleo Cloud at regular intervals to see if the processing has been completed and if the results can be retrieved in a subsequent step. The final step is then the extraction and conversion of the retrieved data into the expected format, i.e. into a csv file in our case.

In this case all steps were successful and the resulting csv file can be verified:

Screen Shot 2015-01-27 at 15.41.01

Note that the order of the attributes (taking LastName and FirstName to the top in the Projections tab) as well as the sorting specification had the expected result.

Importing Data

The data import process in TCC is very similar to the export process shown above in detail, hence we only focus on those aspects that differ.

Import Definition

The import definition wizard differs slightly from the export wizard since it asks for the object and also the operation that is to be executed on the object. In our case we select the create operation on the Candidate:

Screen Shot 2015-01-27 at 15.53.38

Note that there are also more complex operations such as merge, but for this example the creation of a new record in the system is sufficient. On the following screen we specify details for the csv file, such as the delimiter, and, more importantly, the list of attributes expected in the import file:

Screen Shot 2015-01-27 at 16.49.29

Import Configuration

Equivalent to the export configuration we also need an import configuration for the operational details such as where to find the csv file with the records to import. This time we reference the import definition when going through the new configuration wizard resulting in the following:

Screen Shot 2015-01-27 at 16.54.24

We can leave everything else at the default values and save the import configuration.

Running the Import

Since we are aiming to import new candidate records we have to prepare a csv file that matches the previous specification regarding columns, delimiter, header row, etc. This time we just go with a single record and the basic candidate information we specified in the import definition:

Screen Shot 2015-01-27 at 16.53.22
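
As the screenshot is not reproduced here, a minimal illustrative file could look like the one below; the column names and values are assumptions and must match the attribute list, delimiter, and header row defined in the import definition:

FirstName,LastName,EmailAddress
John,Doe,john.doe@example.com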

The execution of the import can be monitored in the same way as before. It is worth not only verifying the status of the several execution steps, but also checking the content of the response file, which indicates the outcome of the actual import:

"Index","Identifier","Status","TransactionType","Result","Message"
"1","","success","candidate.create",,""

Finally, we also want to verify in the Taleo Recruiting Cloud that we can find the imported candidate:

Screen Shot 2015-01-27 at 16.57.12

Conclusion

Leveraging TCC makes both importing data into and exporting data from the Taleo Cloud products a convenient and straightforward process. TCC hides all technical details of the Taleo integration backend service as well as the details of the asynchronous processing, and allows the user to focus on the actual data and the operation in a powerful client application.

 

Fusion HCM Cloud Bulk Integration Automation


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integrations to cover various scenarios using modern and efficient technologies. One of the patterns is bulk integration to load and extract data to/from the cloud. The inbound tool is the File-Based Loader (FBL), evolving into HCM Data Loader (HDL). HDL supports data migration for full HR and incremental loads to support co-existence with Oracle Applications such as E-Business Suite (EBS) and PeopleSoft (PSFT). It also provides the ability to bulk load into configured flexfields. HCM Extracts is an outbound integration tool that lets you choose data, then gathers and archives it. This archived raw data is converted into a desired format and delivered to the supported channels and recipients.

HCM Cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns. This post focuses on how to automate data file transfer with WebCenter Content to initiate the loader. The same APIs can be used to download data files delivered to WebCenter Content through the extract process.

WebCenter Content replaces the SSH File Transfer Protocol (SFTP) server in the cloud as the content repository in Fusion HCM starting with Release 7. There are several ways of importing and exporting content to and from Fusion Applications, such as:

  • Upload using “File Import and Export” UI from home page navigation: Navigator > Tools
  • Upload using WebCenter Content Document Transfer Utility
  • Upload programmatically via Java Code or Web Service API

This post provides an introduction, with working sample code, on how to programmatically export content from Fusion Applications to automate the outbound integration process to other applications in the cloud or on-premise. A Service Oriented Architecture (SOA) composite is implemented to demonstrate the concept.

Main Article

Fusion Applications Security in WebCenter Content

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.

Let’s review the inbound and outbound batch integration flows.

Inbound Flow

This is a typical Inbound FBL process flow:

 

HDL_loader_process

The data file is uploaded to WebCenter Content Server either using Fusion HCM UI or programmatically in /hcm/dataloader/import account. This uploaded file is registered by invoking the Loader Integration Service – http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService.

You must specify the following in the payload:

  • Content id of the file to be loaded
  • Business objects that you are loading
  • Batch name
  • Load type (FBL)
  • Imported file to be loaded automatically

Fusion Applications UI also allows the end user to register and initiate the data load process.

 

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

Steps to Implement PGP

  1. Provide your PGP Public Key.
  2. Oracle’s Cloud Operations team provides you with the Fusion PGP Public Key.

Steps to Implement PGP X.509

  1. Self-signed Fusion key pair (default option):
    • You provide the public X.509 certificate
  2. Fusion key pair provided by you:
    • Public X.509 certificate uploaded via an Oracle Support Service Request (SR)
    • Fusion key pair for Fusion’s X.509 certificate in a Keystore, with the Keystore password

Steps for Certificate Authority (CA) signed Fusion certificate

  1. Obtain a Certificate Authority (CA) signed Fusion certificate
  2. Public X.509 certificate uploaded via SR
  3. Oracle’s Cloud Operations exports the Fusion public X.509 CSR certificate and uploads it to the SR
  4. Using the Fusion public X.509 CSR certificate, the customer provides the signed CA certificate and uploads it to the SR
  5. Oracle’s Cloud Operations provides the Fusion PGP Public Certificate to you via an SR

 

Modification to Loader Integration Service Payload to support PGP

The loaderIntegrationService has a new method called “submitEncryptedBatch” which has an additional parameter named “encryptType”. The valid values to pass in the “encryptType” parameter are taken from the ORA_HRC_FILE_ENCRYPT_TYPE lookup:

  • NONE
  • PGPSIGNED
  • PGPUNSIGNED
  • PGPX509SIGNED
  • PGPX509UNSIGNED

Sample Payload

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Body>
      <ns1:submitEncryptedBatch xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">
         <ns1:ZipFileName>LOCATIONTEST622.ZIP</ns1:ZipFileName>
         <ns1:BusinessObjectList>Location</ns1:BusinessObjectList>
         <ns1:BatchName>LOCATIONTEST622.ZIP</ns1:BatchName>
         <ns1:LoadType>FBL</ns1:LoadType>
         <ns1:AutoLoad>Y</ns1:AutoLoad>
         <ns1:encryptType>PGPX509SIGNED</ns1:encryptType>
      </ns1:submitEncryptedBatch>
   </soap:Body>
</soap:Envelope>

 

Outbound Flow

This is a typical Outbound batch Integration flow using HCM Extracts:

extractflow

The extracted file could be delivered to the WebCenter Content server. HCM Extract has the ability to generate an encrypted output file. In the Extract delivery options, ensure the following options are correctly configured:

  1. Set the HCM Delivery Type to “HCM Connect”
  2. Select an Encryption Mode from the four supported encryption types, or select None
  3. Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:

  • Author: FUSION_APPSHCM_ESS_APPID
  • Security Group: FAFusionImportExport
  • Account: hcm/dataloader/export
  • Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Programmatic Approach to export/import files from/to WebCenter Content

In Fusion Applications, the WebCenter Content Managed server is installed in the Common domain Weblogic Server. The WebCenter Content server provides two types of web services:

Generic JAX-WS based web service

This is a generic web service for general access to the Content Server. The context root for this service is “/idcws”. For details of the format, see the published WSDL at https://<hostname>:<port>/idcws/GenericSoapPort?WSDL. This service is protected through Oracle Web Services Security Manager (OWSM). As a result of allowing WS-Security policies to be applied to this service, streaming Message Transmission Optimization Mechanism (MTOM) is not available for use with this service. Very large files (greater than the memory of the client or the server) cannot be uploaded or downloaded.

Native SOAP based web service

This is the general WebCenter Content service. Essentially, it is a normal socket request to Content Server, wrapped in a SOAP request. Requests are sent to the Content Server using streaming Message Transmission Optimization Mechanism (MTOM) in order to support large files. The context root for this service is “/idcnativews”. The main web service is IdcWebRequestPort and it requires JSESSIONID, which can be retrieved from IdcWebLoginPort service.

The Remote Intradoc Client (RIDC) uses the native web services. Oracle recommends that you do not develop a custom client against these services.

For more information, please refer to “Developing with WebCenter Content Web Services for Integration.”

Generic Web Service Implementation

This post provides a sample of implementing generic web service /idcws/GenericSoapPort. In order to implement this web service, it is critical to review the following definitions to generate the request message and parse the response message:

IdcService:

IdcService is a predefined service node’s attribute that is to be executed, for example, CHECKIN_UNIVERSAL, GET_SEARCH_RESULTS, GET_FILE, CHECKOUT_BY_NAME, etc.

User

User is a subnode within a <service> and contains all user information.

Document

Document is a collection of all the content-item information and is the parent node of all the data.

ResultSet

ResultSet is a typical row/column based schema. The name attribute specifies the name of the ResultSet. It contains a set of <row> subnodes.

Row

Row is a typical row within a ResultSet, which can have multiple <row> subnodes. It contains sets of Field objects.

Field

Field is a subnode of either <document> or <row>. It represents document or user metadata such as content Id, Name, Version, etc.

File

File is a file object that is either being uploaded or downloaded.

For more information, please refer to Configuring Web Services with WSDL, SOAP, and the WSDL Generator.

Web Service Security

The genericSoapPort web service is protected by Oracle Web Services Manager (OWSM). In Oracle Fusion Applications cloud, the OWSM policy is: “oracle/wss11_saml_or_username_token_with_message_protection_service_policy”.

In your SOAP envelope, you will need the appropriate “wsse” security headers. This is a sample:

<soapenv:Header>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" MajorVersion="1" MinorVersion="1" AssertionID="SAML-iiYLE6rlHjI2j9AUZXrXmg22" IssueInstant="2014-10-20T13:52:25Z" Issuer="www.oracle.com">
<saml:Conditions NotBefore="2014-10-20T13:52:25Z" NotOnOrAfter="2015-11-22T13:57:25Z"/>
<saml:AuthenticationStatement AuthenticationInstant="2014-10-20T14:52:25Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">FAAdmin</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:sender-vouches</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
</saml:AuthenticationStatement>
</saml:Assertion>
</wsse:Security>
</soapenv:Header>

Sample SOA Composite

The SOA code provides a sample on how to search for a document in WebCenter Content, extract a file name from the search result, and get the file and save it in your local directory. The file could be processed immediately based on your requirements. Since this is a generic web service with a generic request message, you can use the same interface to invoke various IdcServices, such as GET_FILE, GET_SEARCH_RESULTS, etc.

In the SOA composite sample, two external services are created: GenericSoapPort and FileAdapter. If the service is GET_FILE, then it will save a copy of the retrieved file in your local machine.

Export File

The GET_FILE service returns a specific rendition of a content item, the latest revision, or the latest released revision. A copy of the file is retrieved without performing a check out. It requires either dID (content item revision ID) for the revision, or dDocName (content item name) along with a RevisionSelectionMethod parameter. The RevisionSelectionMethod could be either “Latest” (latest revision of the content) or “LatestReleased” (latest released revision of the content). For example, to retrieve file:

<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">401</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>

Search File

The dID of the content could be retrieved using the GET_SEARCH_RESULTS service. It uses a QueryText attribute in the <Field> node. The QueryText attribute defines the query and must be XML encoded. You can append values for title, content Id, and so on, in the QueryText to refine the search. The syntax for QueryText can be challenging, but once you understand the special character formats, it is straightforward. For example, to search content by its original name:

<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>
</ucm:Document>
</ucm:Service>

In plain text, this is dOriginalName <starts> `Test`. The angle-bracket operator format, such as <starts> or <substring>, is mandatory. You can further refine the query by adding more parameters.

This is a sample SOA composite with two external references, genericSoapPort and FileAdapter.

ucmComposite

This is a sample BPEL process flow that demonstrates how to retrieve the file and save a copy to a local directory using File Adapter. If the idcService is GET_SEARCH_RESULTS, then do not save the file. In a real scenario, you will search, check out and start processing the file.

 

ucmBPEL1

The original file name is preserved when copying it to a local directory by passing the header property to the FileAdapter. For example, create a variable fileName and use assign as follows:

1. get file name from the response message in your <assign> activity as follows:

<from expression="bpws:getVariableData('InvokeGenericSoapPort_GenericSoapOperation_OutputVariable','GenericResponse','/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name=&quot;dOriginalName&quot;]')"/>
<to variable="fileName"/>

Please make note of the XPath expression as this will assist you to retrieve other metadata.

2. Pass this fileName variable to the <invoke> of the FileAdapter as follows:

<bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>

Please add the following property manually to the ../CommonDomain/ucm/cs/config/config.cfg file to enable this QueryText syntax: AllowNativeQueryFormat=true
Then restart the managed server.
Without it, the typical error returned in the StatusMessage field is: “Unable to retrieve search results. Parsing error at character xx in query….”

Testing SOA Composite:

After the composite is deployed in your SOA server, you can test it either from Enterprise Manager (EM) or using SoapUI. These are the sample request messages for GET_SEARCH_RESULTS and GET_FILE.

The following screens show the SOA composites for “GET_SEARCH_RESULTS” and “GET_FILE”:

searchfile

getfile

Get_File Response snippet with critical objects:

<ns2:GenericResponse xmlns:ns2="http://www.oracle.com/UCM">
<ns2:Service IdcService="GET_FILE">
<ns2:Document>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="IdcService">GET_FILE</ns2:Field>
....
<ns2:ResultSet name="FILE_DOC_INFO">
<ns2:Row>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="dDocName">UCMFA000401</ns2:Field>
<ns2:Field name="dDocType">Document</ns2:Field>
<ns2:Field name="dDocTitle">JRD Test</ns2:Field>
<ns2:Field name="dDocAuthor">FAAdmin</ns2:Field>
<ns2:Field name="dRevClassID">401</ns2:Field>
<ns2:Field name="dOriginalName">Readme.html</ns2:Field>
</ns2:Row>
</ns2:ResultSet>
</ns2:ResultSet>
<ns2:File name="" href="/u01/app/fa/config/domains/fusionhost.mycompany.com/CommonDomain/ucm/cs/vault/document/bwzh/mdaw/401.html">
<ns2:Contents>
<xop:Include href="cid:7405676a-11f8-442d-b13c-f8f6c2b682e4" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
</ns2:Contents>
</ns2:File>
</ns2:Document>
</ns2:Service>
</ns2:GenericResponse>

Import (Upload) File for HDL

The above sample can also be used to import files into the WebCenter Content repository for inbound integration or other use cases. The service name is CHECKIN_UNIVERSAL.

Summary

This post demonstrates how to secure and automate the export and import of data files in WebCenter Content server implemented by Fusion HCM Cloud. It further demonstrates how integration tools like SOA can be implemented to automate, extend and orchestrate integration between HCM in the cloud and Oracle or non-Oracle applications, either in Cloud or on-premise sites.

The SOA sample code is here.

Fusion HCM Cloud – Bulk Integration Automation Using Managed File Transfer (MFT) and Node.js


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from the cloud.

The inbound tool is the File-Based Loader (FBL), evolving into HCM Data Loader (HDL). HDL is a powerful tool for bulk-loading data from any source into Oracle Fusion Human Capital Management (Oracle Fusion HCM). HDL supports one-time data migration and incremental loads to support co-existence with Oracle Applications such as E-Business Suite (EBS) and PeopleSoft (PSFT).

HCM Extracts is an outbound integration tool that lets you choose HCM data, gathers it from the HCM database, and archives it as XML. This archived raw XML data can be converted into a desired format and delivered to the supported channels and recipients.

HCM cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns.

Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal systems and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The built-in, extensive reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. Node.js is built on an event-driven, asynchronous model: incoming requests are non-blocking, and each request is passed off to an asynchronous callback handler, which frees up the main thread to respond to more requests.

This post focuses on how to automate HCM Cloud batch integration using MFT (Managed File Transfer) and Node.js. MFT can receive files, decrypt/encrypt files and invoke Service Oriented Architecture (SOA) composites for various HCM integration patterns.

 

Main Article

Managed File Transfer (MFT)

Oracle Managed File Transfer (MFT) is a high performance, standards-based, end-to-end managed file gateway. It features design, deployment, and monitoring of file transfers using a lightweight web-based design-time console that includes file encryption, scheduling, and embedded FTP and sFTP servers.

Oracle MFT provides built-in compression, decompression, encryption and decryption actions for transfer pre-processing and post-processing. You can create new pre-processing and post-processing actions, which are called callouts.

The callouts can be associated with either the source or the target. The sequence of processing action execution during a transfer is as follows:

  1. Source pre-processing actions
  2. Target pre-processing actions
  3. Payload delivery
  4. Target post-processing actions

Source Pre-Processing

Source pre-processing is triggered right after a file has been received and has identified a matching Transfer. This is the best place to do file validation, compression/decompression, encryption/decryption and/or extend MFT.

Target Pre-Processing

Target pre-processing is triggered just before the file is delivered to the Target by the Transfer. This is the best place to send files to external locations and protocols not supported in MFT.

Target Post-Processing

Post-processing occurs after the file is delivered. This is the best place for notifications, analytics/reporting, or remote endpoint file renaming.

For more information, please refer to the Oracle MFT documentation.

 

HCM Inbound Flow

This is a typical Inbound FBL/HDL process flow:

inbound_mft

The FBL/HDL process for HCM is a two-phase web services process, as follows (an illustrative sketch of the upload step appears after the list):

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke “LoaderIntegrationService” or “HCMDataLoader” to initiate the loading process.
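As an illustration of the first step, the following is a simplified Node.js sketch that posts a CHECKIN_UNIVERSAL GenericRequest to the WCC GenericSoapPort endpoint. The host name, credentials, file name and endpoint path are placeholders, the payload shape follows typical WCC GenericRequest usage rather than a verified WSDL, and it ignores the OWSM message-protection requirements discussed later in this post. The mft2hcm utility described below builds this call for you, so treat this purely as a structural sketch.

// Illustrative sketch only: upload a base64-encoded data file to WCC/UCM via GenericSoapPort.
// Host, credentials, file name and endpoint path are placeholders (assumptions).
var https = require('https'),
    fs = require('fs');

var fileData = fs.readFileSync('Worker.zip').toString('base64'); // placeholder file name

var soapEnvelope =
  '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">' +
  '<soapenv:Body><ucm:GenericRequest webKey="cs">' +
  '<ucm:Service IdcService="CHECKIN_UNIVERSAL"><ucm:Document>' +
  '<ucm:Field name="dDocTitle">Worker.zip</ucm:Field>' +
  '<ucm:Field name="dDocType">Document</ucm:Field>' +
  '<ucm:Field name="dSecurityGroup">FAFusionImportExport</ucm:Field>' +
  '<ucm:Field name="dDocAccount">hcm/dataloader/import</ucm:Field>' +
  '<ucm:File name="primaryFile" href="Worker.zip">' +
  '<ucm:Contents>' + fileData + '</ucm:Contents>' +
  '</ucm:File></ucm:Document></ucm:Service>' +
  '</ucm:GenericRequest></soapenv:Body></soapenv:Envelope>';

var options = {
  host: 'HCMHostname',            // placeholder
  port: 443,
  path: '/idcws/GenericSoapPort', // typical WCC SOAP endpoint path (assumption)
  method: 'POST',
  headers: {
    'Content-Type': 'text/xml; charset=utf-8',
    'Authorization': 'Basic ' + new Buffer('username:password').toString('base64')
  }
};

var req = https.request(options, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  // The response carries the content ID of the checked-in file for the subsequent loader call
  res.on('end', function() { console.log(body); });
});
req.on('error', function(e) { console.log('Upload failed: ' + e.message); });
req.write(soapEnvelope);
req.end();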

The following diagram illustrates the MFT steps with respect to “Integration” for FBL/HDL:

inbound_mft_2

HCM Outbound Flow

This is a typical outbound batch Integration flow using HCM Extracts:

extractflow

 

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through the Enterprise Scheduler Service (ESS) – this report is stored in WCC under the hcm/dataloader/export account
  • The MFT scheduler can pull files from WCC
  • The data file(s) are either uploaded to the customer’s sFTP server as a pass-through, or handed to integration tools such as Service Oriented Architecture (SOA) for orchestrating and processing data to target applications in the cloud or on-premises

The following diagram illustrates the MFT orchestration steps in “Integration” for Extract:

 

outbound_mft

 

The extracted file could be delivered to the WebCenter Content server. HCM Extracts can also generate an encrypted output file. In the Extract delivery options, ensure the following are configured correctly:

  • Set the HCM Delivery Type to "HCM Connect"
  • Select an Encryption Mode from the four supported encryption types, or select None
  • Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:
Author: FUSION_APPSHCM_ESS_APPID
Security Group: FAFusionImportExport
Account: hcm/dataloader/export
Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Fusion Applications Security

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as "Integration Specialist." The role may have privileges such as read, write and delete. The accounts are predefined by each application; for example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export for import and export respectively.
The FBL/HDL web services are secured through Oracle Web Service Manager (OWSM) using the following policy: oracle/wss11_saml_or_username_token_with_message_protection_service_policy.

The client must satisfy the message protection policy to ensure that the payload is encrypted or sent over the SSL transport layer.

A client policy that can be used to meet this requirement is: “oracle/wss11_username_token_with_message_protection_client_policy”

To use this policy, the message must be encrypted using a public key provided by the server. When the message reaches the server it can be decrypted by the server’s private key. A KeyStore is used to import the certificate and it is referenced in the subsequent client code.

The public key can be obtained from the certificate provided in the service WSDL file.
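For example, assuming the certificate has been saved from the WSDL to a file named hcm_cloud.cer (a hypothetical name), it could be imported into a JKS keystore with the standard keytool utility:

keytool -importcert -alias hcm_cloud -file hcm_cloud.cer -keystore hcm_client.jks -storepass <keystore-password>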

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion HCM supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP encryption capability, a customer must exchange encryption keys with Fusion so that (an illustrative command-line sketch follows the list):

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion
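As an illustration of how the customer-side steps might look with the GnuPG command-line tool, the key and file names below are placeholders and the actual key-exchange procedure with Fusion is handled separately:

# Import the Fusion HCM public key received during key exchange (file name is a placeholder)
gpg --import fusion_hcm_public.asc

# Encrypt an inbound data file so that only Fusion can decrypt it (recipient name is a placeholder)
gpg --encrypt --recipient "Fusion HCM" Worker.zip

# Decrypt an outbound extract received from Fusion using the customer's own private key
gpg --decrypt --output Extract.xml Extract.xml.gpg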

MFT Callout using Node.js

 

Prerequisites

To automate HCM batch integration patterns, the components described below (the Node.js utility and the RunScript callout) must be installed and configured.

 

Node.js Utility

A simple Node.js utility, "mft2hcm", has been developed as an MFT server callout to upload files to or download files from the Oracle WebCenter Content server and to initiate the HCM SaaS loader service. It utilizes the node "mft-upload" package and provides SOAP substitution templates for WebCenter Content (UCM) and the Oracle HCM loader service.

Please refer to the “mft2hcm” node package for installation and configuration.

RunScript

RunScript is an MFT callout, configured here as "Run Script Pre 01", that can be injected into MFT pre- or post-processing. This callout always passes the following default parameters to the script (an illustrative parsing sketch follows the list):

  • Filename
  • Directory
  • ECID
  • Filesize
  • Targetname (not for source callouts)
  • Sourcename
  • Createtime

Please refer to “PreRunScript” for more information on installation and configuration.

MFT Design

MFT Console enables the following tasks depending on your user roles:

Designer: Use this page to create, modify, delete, rename, and deploy sources, targets, and transfers.

Monitoring: Use this page to monitor transfer statistics, progress, and errors. You can also use this page to disable, enable, and undeploy transfer deployments and to pause, resume, and resubmit instances.

Administration: Use this page to manage the Oracle Managed File Transfer configuration, including embedded server configuration.

Please refer to the MFT Users Guide for more information.

 

HCM FBL/HDL MFT Transfer

This is a typical MFT transfer design and configuration for FBL/HDL:

MFT_FBL_Transfer

The transfer could be designed for additional steps such as compress file and/or encrypt/decrypt files using PGP, depending on the use cases.

 

HCM FBL/HDL (HCM-MFT) Target

The MFT server receives files from any source protocol such as SFTP, SOAP, a local file system or a back-end integration process. The file can be decrypted, uncompressed or validated before a Source or Target pre-processing callout uploads it to UCM and then notifies HCM to initiate the batch load. Finally, the original file is backed up to the local file system, a remote SFTP server or a cloud-based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion.

This is a typical target configuration in the MFT-HCM transfer:

Click on target Pre-Processing Action and select “Run Script Pre 01”:

MFT_RunScriptPre01

 

Enter the "scriptLocation" pointing to where the node package "mft2hcm" is installed. For example: <Node.js-Home>/hcm/node_modules/mft2hcm/mft2hcm.js

MFTPreScriptUpload

 

Do not check "UseFileFromScript". This property replaces the inbound (source) file in MFT with the file returned from the script execution. In FBL/HDL, the target execution response does not contain a file.

 

HCM Extract (HCM-MFT) Transfer

An external event or scheduler triggers the MFT server to search for a file in WCC using a search query. Once a document id is identified, it is retrieved using a "Source Pre-Processing" callout, which injects the retrieved file into the MFT Transfer. The file can then be decrypted, validated or decompressed before being sent to an MFT Target of any protocol such as SFTP, file system, SOAP Web Service or a back-end integration process. Finally, the original file is backed up to the local file system, a remote SFTP server or a cloud-based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion. The MFT server can live either on-premises or in a cloud (iPaaS) hosted environment.

This is a typical configuration of HCM-MFT Extract Transfer:

MFT_Extract_Transfer

 

In the Source definition, add “Run Script Pre 01” processing action and enter the location of the script:

MFTPreScriptDownload

 

"UseFileFromScript" must be checked because the source scheduler is triggered with the mft2hcm payload (UCM-PAYLOAD-SEARCH) to initiate WCC's search and get operations. Once the file is retrieved from WCC, this flag tells the MFT engine to substitute the inbound file with the one downloaded from WCC.

 

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using MFT and Node.js. The Node.js package could be replaced with WebCenter Content native APIs and SOA for orchestration. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

HCM Atom Feed Subscriber using Node.js


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Node.js. The assumption is that the reader has some basic knowledge on Node.js. Please refer to this link to download and install Node.js in your environment.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. It runs on a single-threaded event loop and leverages asynchronous calls for various operations such as I/O. This is an evolution from the stateless web based on the stateless request-response paradigm. For example, when a request is sent to invoke a service such as REST or a database query, Node.js will continue serving new requests. When a response comes back, it will jump back to the respective requestor. Node.js is lightweight and provides a high level of concurrency. However, it is not suitable for CPU-intensive operations as it is single-threaded.

Node.js is built on an event-driven, asynchronous model. Incoming requests are non-blocking. Each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.
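A trivial illustration of this non-blocking behaviour (the URL is arbitrary):

// The HTTPS call below does not block: the callback runs when the response arrives,
// while the main thread is immediately free to do other work.
var https = require('https');

https.get('https://www.oracle.com', function(res) {
  console.log('Response received with status ' + res.statusCode);
  res.resume(); // drain the response so the socket can be released
});

console.log('Request sent; the event loop keeps serving other work in the meantime');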

For more information on Node.js, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The following are the primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, Node.js is implemented as one of the solutions for consuming the "Employee New Hire" feeds, but the design and development are similar for all the supported objects in HCM.

 

Refer to my blog on how to invoke secured REST services using Node.js.

Security

The RESTful services in Oracle HCM Cloud are protected with Oracle Web Service Manager (OWSM). The server policy allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager (OAM) token service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

The client must provide one of the above authentication types in the security headers of the invocation call. The sample in this post uses the HTTP Basic Authentication over SSL policy.

 

Fusion Security Roles

REST and Atom Feed Roles

To use Atom feed, a user must have any HCM Cloud role that inherits the following roles:

  • “HCM REST Services and Atom Feeds Duty” – for example, Human Capital Management Integration Specialist
  • “Person Management Duty” – for example, Human Resource Specialist

REST/Atom Privileges

 

Privilege Name – Resource and Method

  • PER_REST_SERVICE_ACCESS_EMPLOYEES_PRIV – emps (GET, POST, PATCH)
  • PER_REST_SERVICE_ACCESS_WORKSTRUCTURES_PRIV – grades (GET), jobs (GET), jobFamilies (GET), positions (GET), locations (GET), organizations (GET)
  • PER_ATOM_WORKSPACE_ACCESS_EMPLOYEES_PRIV – employee/newhire (GET), employee/termination (GET), employee/empupdate (GET), employee/empassignment (GET)
  • PER_ATOM_WORKSPACE_ACCESS_WORKSTRUCTURES_PRIV – workstructures/grades (GET), workstructures/jobs (GET), workstructures/jobFamilies (GET), workstructures/positions (GET), workstructures/locations (GET), workstructures/organizations (GET)

 

 

Atom Payload Response Structure

The Atom feed response is in XML format. Please see the following diagram to understand the feed structure:

 

AtomFeedSample_1

 

A feed can have multiple entries. The entries are ordered by the "updated" timestamp of each <entry>, with the first one being the latest. There are two critical elements that provide information on how to process these entries downstream.

Content

The <content> element contains critical attributes such as Employee Number, Phone, Suffix, CitizenshipLegislation, EffectiveStartDate, Religion, PassportNumber, NationalIdentifierType, EventDescription, LicenseNumber, EmployeeName, WorkEmail and NationalIdentifierNumber. It is in JSON format, as you can see from the above diagram.

Resource Link

If the data provided in <content> is not sufficient, the RESTful service resource link is provided to get more details. Please refer to the employee resource link for each entry in the above diagram. Node.js can invoke this RESTful resource link.

 

Avoid Duplicate Atom Feed Entries

To avoid consuming feeds with duplicate entries, one of the following parameters must be provided to consume only the feeds published since the last poll:

1. updated-min: Returns entries within the collection where Atom:updated > updated-min

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-16T09:16:00.000Z – returns entries published after "2015-09-16T09:16:00.000Z".

2. updated-max: Returns entries within the collection where Atom:updated <= updated-max

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-max=2015-09-16T09:16:00.000Z – returns entries published at or before "2015-09-16T09:16:00.000Z".

3. updated-min and updated-max: Returns entries within the collection where (Atom:updated > updated-min && Atom:updated <= updated-max)

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-11T10:03:35.000Z&updated-max=2015-09-16T09:16:00.000Z – returns entries published between "2015-09-11T10:03:35.000Z" and "2015-09-16T09:16:00.000Z".

Node.js Implementation

Refer to my blog on how to invoke secured REST services using Node.js. The following are things to consider when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in a feed to begin with.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted for use in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in a file.

For example:

//persist timestamp for the next call

if (i == 0) {

fs.writeFile('updateDate', updateDate[0].text, function(fserr) {

if (fserr) throw fserr; } );

}

 

Next Call

In the next call, read the updated timestamp value from the above persisted file to generate the path as follows:

//Check if updateDate file exists and is not empty
try {

var lastFeedUpdateDate = fs.readFileSync('updateDate');

console.log('Last Updated Date is: ' + lastFeedUpdateDate);

} catch (e) {

// handle error

}

if (lastFeedUpdateDate.length > 0) {

pathUri = '/hcmCoreApi/Atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;

} else {

pathUri = '/hcmCoreApi/Atomservlet/employee/newhire';

}

 

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the "node-elementtree" package is used to parse the XML. You can use any library, as long as the following data is extracted for each entry in the feed for downstream processing.

var et = require('elementtree');
//Request call
var request = http.get(options, function(res){
var body = "";
res.on('data', function(data) {
body += data;
});
res.on('end', function() {

//Parse Feed Response - the structure is defined in section: Atom Payload Response Structure
feed = et.parse(body);

//Identify if feed has any entries
var numberOfEntries = feed.findall('./entry/').length;

//if there are entries, extract data for downstream processing
if (numberOfEntries > 0) {
console.log('Get Content for each Entry');

//Get Data based on XPath Expression
var content = feed.findall('./entry/content/');
var entryId = feed.findall('./entry/id');
var updateDate = feed.findall('./entry/updated');

for ( var i = 0; i < content.length; i++ ) {

//get Resouce link for the respected entry
console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));

//get Content data of the respective entry which in JSON format
console.log(feed.findall('content.text'));
 
//persist timestamp for the next call
if (i == 0) {
  fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
  if (fserr) throw fserr; } );

}
} // close the for loop
} // close if (numberOfEntries > 0)
}); // close the res.on('end') handler
}); // close the http.get callback

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.
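For example, a minimal Node.js sketch of such a lookup keeps the processed entry IDs in a local file; the file name is illustrative and any persistent store (database, messaging cloud) could be used instead.

var fs = require('fs');

// Load previously processed entry IDs (one per line); the file name is illustrative.
var processedIds = {};
try {
  fs.readFileSync('processedEntryIds.txt', 'utf8').split('\n').forEach(function(id) {
    if (id) { processedIds[id] = true; }
  });
} catch (e) {
  // first run - no file yet
}

function isNewEntry(entryId) {
  if (processedIds[entryId]) {
    return false; // already processed - skip to avoid duplication
  }
  processedIds[entryId] = true;
  fs.appendFileSync('processedEntryIds.txt', entryId + '\n');
  return true;
}

// Example: isNewEntry('Atomservlet:newhire:EMP300000005960615') returns true only the first time.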

 

Downstream Processing Pattern

A Node.js scheduler can be implemented to consume feeds periodically. Once the message is parsed, there are several patterns to support various use cases. In addition, you could have multiple subscribers such as Employee new hire, Employee termination, locations, jobs, positions, etc. For guaranteed transactions, each feed entry can be published in Messaging Cloud or Oracle Database to stage all the feeds. This pattern provides global transaction support and recovery when downstream applications are not available or throw errors. The following diagram shows the high-level architecture:

nodejs_soa_atom_pattern
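As a simple illustration of the periodic consumption mentioned above, a minimal polling loop can be built with the standard setInterval timer (a scheduler module such as node-cron could be used instead); consumeNewHireFeed is a placeholder for the feed-consumption logic shown in this post.

// Poll the Atom feed every 15 minutes; consumeNewHireFeed() stands in for the
// feed consumption and parsing logic shown earlier in this post.
var POLL_INTERVAL_MS = 15 * 60 * 1000;

function consumeNewHireFeed() {
  console.log('Polling employee/newhire feed at ' + new Date().toISOString());
  // ...invoke the REST resource, parse entries and dispatch them downstream...
}

consumeNewHireFeed();                               // run once at start-up
setInterval(consumeNewHireFeed, POLL_INTERVAL_MS);  // then on a fixed interval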

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume new feeds (avoiding duplication) since the last poll. Finally, it provides an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

var et = require('elementtree');

var uname = 'username';
var pword = 'password';
var http = require('https'),
fs = require('fs');

var XML = et.XML;
var ElementTree = et.ElementTree;
var element = et.Element;
var subElement = et.SubElement;

var lastFeedUpdateDate = '';
var pathUri = '';

//Check if updateDate file exists and is not empty
try {
var lastFeedUpdateDate = fs.readFileSync('updateDate');
console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
// add error logic
}

//get last feed updated date to get entries since that date
if (lastFeedUpdateDate.length > 0) {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire';
}

// Generate Request Options
var options = {
ca: fs.readFileSync('HCM Cert'), //get HCM Cloud certificate - either through openssl or export from web browser
host: 'HCMHostname',
port: 443,
path: pathUri,
"rejectUnauthorized" : false,
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};

//Invoke REST resource for Employee New Hires
var request = http.get(options, function(res){
var body = "";
res.on('data', function(data) {
body += data;
});
res.on('end', function() {

//Parse Atom Payload response 
feed = et.parse(body);

//Get Entries count
var numberOfEntries = feed.findall('./entry/').length;

console.log('...................Feed Extracted.....................');
console.log('Number of Entries: ' + numberOfEntries);

//Process each entry
if (numberOfEntries > 0) {

console.log('Get Content for each Entry');

var content = feed.findall('./entry/content/');
var entryId = feed.findall('./entry/id');
var updateDate = feed.findall('./entry/updated');

for ( var i = 0; i < content.length; i++ ) {
console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));
console.log(feed.findall('content.text'));

//persist timestamp for the next call
if (i == 0) {
fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
if (fserr) throw fserr; } );
}

fs.writeFile(entryId[i].text,content[i].text, function(fserr) {
if (fserr) throw fserr; } );
}
}

})
res.on('error', function(e) {
console.log("Got error: " + e.message);
});
});

 

 

HCM Atom Feed Subscriber using SOA Cloud Service


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Oracle Service Oriented Architecture (SOA) Cloud Service. Oracle SOA Cloud Service provides a PaaS computing platform solution for running Oracle SOA Suite, Oracle Service Bus, and Oracle API Manager in the cloud. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based connectivity to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure.

For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The following are the primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, SOA Cloud Service is implemented as one of the solutions for consuming the "Employee New Hire" feeds, but the design and development are similar for all the supported objects in HCM.

 

HCM Atom Introduction

For Atom "security, roles and privileges", please refer to my blog HCM Atom Feed Subscriber using Node.js.

 

Atom Feed Response Template

 

AtomFeedSample_1

SOA Cloud Service Implementation

Refer to my blog on how to invoke secured REST services using SOA. The following diagram shows the patterns used to subscribe to HCM Atom feeds and process them for downstream applications that may have either web service or file-based interfaces. Optionally, all entries from the feeds could be staged either in a database or in the messaging cloud before processing, for situations where a downstream application is unavailable or throwing system errors. This provides the ability to consume the feeds but hold the processing until downstream applications are available. Enterprise Scheduler Service (ESS), a component of SOA Suite, is leveraged to invoke the subscriber composite periodically.

 

soacs_atom_pattern

The following diagram shows the implementation of the above pattern for Employee New Hire:

soacs_atom_composite

 

Feed Invocation from SOA

Although the HCM Cloud feed is in XML representation, the media type of the payload response is "application/atom+xml". This media type is not supported by the built-in REST Adapter at this time, so use the following Java embedded activity in your BPEL component.

Once the built-in REST Adapter supports the Atom media type, the Java embedded activity can be replaced, further simplifying the solution.

try {

String url = "https://mycompany.oraclecloud.com";
String lastEntryTS = (String)getVariableData("LastEntryTS");
String uri = "/hcmCoreApi/atomservlet/employee/newhire";

//Generate URI based on last entry timestamp from previous invocation
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

java.net.URL obj = new URL(null,url+uri, new sun.net.www.protocol.https.Handler());

javax.net.ssl.HttpsURLConnection conn = (HttpsURLConnection) obj.openConnection();
conn.setRequestProperty("Content-Type", "application/vnd.oracle.adf.resource+json");
conn.setDoOutput(true);
conn.setRequestMethod("GET");

String userpass = "username" + ":" + "password";
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes("UTF-8"));
conn.setRequestProperty ("Authorization", basicAuth);

String response="";
int responseCode=conn.getResponseCode();
System.out.println("Response Code is: " + responseCode);

if (responseCode == HttpsURLConnection.HTTP_OK) {

BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));

String line;
String contents = "";

while ((line = reader.readLine()) != null) {
contents += line;
}

setVariableData("outputVariable", "payload", "/client:processResponse/client:result", contents);

reader.close();

}

} catch (Exception e) {
e.printStackTrace();
}

 

The following are things to consider when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in a feed to begin with.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted for use in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in Database Cloud Service (DBaaS).

This is the sample database table:

create table atomsub (
id number,
feed_ts varchar2(100) );

For initial consumption, keep the table empty or add a row with the value of feed_ts to consume initial feeds. For example, the feed_ts value could be “2015-09-16T09:16:00.000Z” to get all the feeds after this timestamp.

In the SOA composite, you update the above table to persist the "/entry/updated" timestamp in the feed_ts column of the "atomsub" table (an illustrative SQL sketch follows).
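For illustration, the persistence step could be as simple as the following SQL executed through a database adapter in the composite; the bind variable name and the fixed id value are illustrative.

-- Seed row for the first run (optional - leave the table empty to consume all available feeds):
INSERT INTO atomsub (id, feed_ts) VALUES (1, '2015-09-16T09:16:00.000Z');

-- After each successful run, store the "updated" value of the first (latest) entry:
UPDATE atomsub SET feed_ts = :lastEntryTS WHERE id = 1;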

 

Next Call

In the next call, read the updated timestamp value from the database and generate the URI path as follows:

String uri = "/hcmCoreApi/atomservlet/employee/newhire";
String lastEntryTS = (String)getVariableData("LastEntryTS");
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

The above step is done in the Java embedded activity, but it could also be done in SOA using <assign> expressions.

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the feed response is stored in the output variable as a string. The following expression in an <assign> activity will convert it to XML:

oraext:parseXML($outputVariable.payload/client:result)
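For illustration, the expression could sit in an <assign> activity similar to the following sketch, where FeedVariable is assumed to be a variable typed to hold the parsed feed (the variable names follow the later Translate example and are otherwise illustrative):

<assign name="ParseAtomFeed">
  <copy>
    <from>oraext:parseXML($outputVariable.payload/client:result)</from>
    <to>$FeedVariable.payload</to>
  </copy>
</assign>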


Parsing Each Atom Entry for Downstream Processing

Each entry has two major elements, as mentioned in the Atom response payload structure.

Resource Link

This contains the REST employee resource link to get the Employee object. This is a typical REST invocation from SOA using the REST Adapter. For more information on invoking REST services from SOA, please refer to my blog.

 

Content Type

This contains selected resource data in JSON format. For example: { "Context" : [ { "EmployeeNumber" : "212", "PersonId" : "300000006013981", "EffectiveStartDate" : "2015-10-08", "EffectiveDate" : "2015-10-08", "WorkEmail" : "phil.davey@mycompany.com", "EmployeeName" : "Davey, Phillip" } ] }

In order to use the above data, it must be converted to XML. The BPEL component provides a Translate activity to transform JSON to XML. Please refer to the SOA Development documentation, section B1.8 – doTranslateFromNative.

 

The <Translate> activity syntax to convert the above JSON string from <content> is as follows:

<assign name="TranslateJSON">
<bpelx:annotation>
<bpelx:pattern>translate</bpelx:pattern>
</bpelx:annotation>
<copy>
 <from>ora:doTranslateFromNative(string($FeedVariable.payload/ns1:entry/ns1:content), 'Schemas/JsonToXml.xsd', 'Root-Element', 'DOM')</from>
 <to>$JsonToXml_OutputVar_1</to>
 </copy>
</assign>

This is the output:

jsonToXmlOutput

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the JSON format and use the above <content> as a sample to generate a schema. Please see the following diagrams:

JSON_nxsd_1JSON_nxsd_2JSON_nxsd_3

JSON_nxsd_5

 

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.

 

Scheduler and Downstream Processing

Oracle Enterprise Scheduler Service (ESS) is configured to invoke the above composite periodically. At present, SOA Cloud Service is not provisioned with ESS, but refer to this to extend your domain. Once the feed response message is parsed, you can process it for downstream applications based on your requirements or use cases. For guaranteed transactions, each feed entry can be published in Messaging Cloud or Oracle Database to stage all the feeds. This provides global transaction support and recovery when downstream applications are not available or throw errors.

The following diagram shows how to create a job definition for a SOA composite. For more information on ESS, please refer to this.

ess_3

SOA Cloud Service Instance Flows

First invocation without updated-min argument to get all the feeds

 

soacs_atom_instance_json

Atom Feed Response from above instance

AtomFeedResponse_1

 

Next invocation with updated-min argument based on last entry timestamp

soacs_atom_instance_noentries

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume new feeds (avoiding duplication) since the last poll. Finally, it provides an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

The sample prototype code is available here.

 

soacs_atom_composite_1

 

 

Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extracts is an outbound integration tool that lets you select HCM data elements, extract them from the HCM database and archive these data elements as XML. This archived raw XML data can be converted into a desired format and delivered to supported channels and recipients.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is still required, such as generating the FBL or HDL file, uploading these files to WebCenter Content and initiating the FBL or HDL web services. This post describes how to design and automate these steps leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle's cloud Platform as a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” to initiate the loading process.

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post, we will use a sample use case in which we receive a data file from the customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

hcm_inbound_v1

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through the Enterprise Scheduler Service (ESS)
  • The report is stored in WCC under the hcm/dataloader/export account.

 

However, the report must then be delivered to its destination depending on the use cases. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

hcm_outbound_v1

 

For an HCM bulk integration introduction, including security, roles and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Transfer (MFT) and Node.js. For an introduction to WebCenter Content integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer receives benefits data from their partner periodically in a file with CSV (comma separated value) format. This data must be converted into HDL format for the "ElementEntry" object, and the loading process must then be initiated in Fusion HCM Cloud.

This is a sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated based on the above sample file:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process, from the source data file to generating the HDL file and initiating the loading process in HCM Cloud. The XML schema of the HDL object "ElementEntry" is created. The source data is mapped to this HDL schema and SOA activities then generate the HDL file.

Having a canonical pattern automates the generation of the HDL file, and it becomes a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities, such as generating the HDL file, compressing and encrypting it, uploading it to WebCenter Content and invoking the web services, need to be developed only once; they then become reusable assets as well.

Please refer to Wikipedia for the definition of Canonical Schema Pattern

The following are the design considerations:

1. Convert source data file from delimited format to XML

2. Generate Canonical Schema of ElementEntry HDL Object

3. Transform source XML data to HDL canonical schema

4. Generate and compress HDL file

5. Upload a file to WebCenter Content and invoke HDL web service

 

Please refer to SOA Cloud Service Develop and Deploy for introduction and creating SOA applications.

SOA Composite Design

This is a composite based on above implementation principles:

hdl_composite

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use native format builder to convert CSV data to XML format. For more information on File Adapter, refer to this. For more information on Native Format Builder, refer to this.

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

FileAdapterConfig

nxsd1

nxsd2_v1 nxsd3_v1 nxsd4_v1 nxsd5_v1 nxsd6_v1 nxsd7_v1

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryHdlData" targetNamespace="http://TargetNamespace.com/GetEntryHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Entry" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntry" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EntryType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="CreatorType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryValueHdlData" targetNamespace="http://TargetNamespace.com/GetEntryValueHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="EntryValue" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="InputValueName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ScreenEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

In Native Format Builder, change the "|" separator to "," in the sample file and change it back to "|" for each element in the generated schema.

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately, and Native Format Builder will convert each object into the HDL output file. The transformation could be complex depending on the source data format and the organization of data values. In our sample use case, each row has one ElementEntry object and 3 ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

Entry_Desc_v1

The main ElementEntry attributes are mapped from each respective row, but the ElementEntryValue attributes are located at the end of each row; in this sample, each row results in 3 ElementEntryValue entries. This can be achieved easily by transforming each row with different mappings, as follows:

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "1" from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "2" from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "3" from the above diagram

 

Metadata Attribute

The most common use case is to use the "merge" action for creating and updating objects. In this use case it is hard-coded to "merge", but the action could be made dynamic if the source data row carries this information. The "delete" action removes the entire record and must not be used with a "merge" instruction for the same record, as HDL cannot guarantee in which order the instructions will be processed. It is highly recommended to correct the data rather than to delete and recreate it using the "delete" action. The deleted data cannot be recovered.

 

This is the sample XSL transformation developed in JDeveloper to split each row into 3 rows for the ElementEntryValue object:

<xsl:template match="/">
<tns:Root-Element>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C9"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C10"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C11"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C12"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C13"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C14"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
</tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is a BPEL component where all the major orchestration activities are defined. In this sample, all the activities after transformation are reusable and can be moved to a separate composite. A separate composite may be developed only for transformation and data enrichment that in the end invokes the reusable composite to complete the loading process.

 

hdl_bpel_v2

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

instance1

BPEL Instance Flow

audit_1

Receive Input Activity – receives delimited data and converts it to XML format through Native Format Builder using the File Adapter

audit_2

Transformation to Canonical ElementEntry data

Canonical_entry

Transformation to Canonical ElementEntryValue data

Canonical_entryvalue

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert the customer's data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).


Automating Data Loads from Taleo Cloud Service to BI Cloud Service (BICS)


Introduction

This article outlines a method for extracting data from Taleo Cloud Service and automatically loading that data into BI Cloud Service (BICS). Two tools will be used: the Taleo Connect Client and the BICS Data Sync Utility. The Taleo Connect Client will be configured to extract data in CSV format from Taleo and save it in a local directory. The Data Sync tool will monitor that local directory and, once the file is available, load the data into BICS using an incremental load strategy. This process can be scheduled, or run on-demand.

 

Main Article

This article will be broken into 3 sections.

1. Set-up and configuration of the Taleo Connect Client,

2. Set-up and configuration of the Data Sync Tool,

3. The scheduling and configuration required so that the process can be run automatically and seamlessly.

 

1. Taleo Connect

The Taleo Connect Tool communicates with the Taleo back end via web services and provides an easy-to-use interface for creating data exports and loads.

Downloading and Installing

The Taleo Connect tool can be downloaded from the Oracle Software Delivery Cloud.

a. Search for ‘Oracle Taleo Platform Cloud Service – Connect’, and then select the Platform.  The tool is available for Microsoft Windows and Linux.

1

 

b. Click through the agreements and then select the ‘Download All’ option.

 

1

c. Extract the 5 zip files to a single directory.

d. Run the ‘Taleo Connect Client Application Installer’

2

e. If specific Encryption is required, enter that in the Encryption Key Server Configuration screen, or select ‘Next’ to use default encryption.

f. When prompted for the Product Packs directory, select the ‘Taleo Connect Client Application Data Model’ folder that was downloaded and unzipped in the previous step, and then select the path for the application to be installed into.

 

Configuring Taleo Connect

a. Run the Taleo Connect Client. By default on Windows, it is installed in the "C:\Taleo Connect Client" directory. The first time the tool is run, a connection needs to be defined. Subsequent times this connection will be used by default.

b. Enter details of the Taleo environment and credentials.  Important – the user must have the ‘Integration Role’ to be able to use the Connect Client.

c. Select the Product and correct version for the Taleo environment.  In this example ‘Recruiting 14A’.

d. Select ‘Ping’ to confirm the connection details are correct.

3

 

Creating Extracts from Taleo

Exporting data with Taleo Connect tool requires an export definition as well as an export configuration.  These are saved as XML files, and can then be run from a command line to execute the extract.

This article will walk through very specific instructions for this use case.  More details on the Connect Client can be found in this article.

1. Create The Export Definition

a. Under the ‘File’ menu, select ‘New Export Wizard’

1

b. Select the Product and Model, and then the object that you wish to export.  In this case ‘Department’ is selected.

Windows7_x64

c. To select the fields to be included in the extract, choose the 'Projections' workspace tab, as shown below, and then drag the fields from the Entity Structure into that space. In this example the whole 'Department' tree is dragged into the Projections section, which brings all the fields in automatically.

 

Windows7_x64

d. There are options to Filter and Sort the data, as well as Advanced Options, which include using sub-queries, grouping, joining, and more advanced filtering.  For more information on these, see the Taleo Product Documentation.  In the case of a large transaction table, it may be worth considering building a filter that only extracts the last X period of data, using the LastModifiedDate field, to limit the size of the file created and processed each time.  In this example, the Dataset is small, so a full extract will be run each time.

 

Windows7_x64

e. Check the ‘CSV header present’ option.  This adds the column names as the first row of the file, which makes it easier to set up the source in the Data Sync tool.

Windows7_x64

f. Once complete, save the Export Definition with the disk icon, or under the ‘File’ menu.

 

2. Create The Export Configuration

a. Create the Export Configuration, by selecting ‘File’ and the ‘New Configuration Wizard’.

6

b. Base the export specification on the Export Definition created in the last step.

Windows7_x64

c. Select the Default Endpoint, and then ‘Finish’.

8

d. By default the name of the Response, or output file, is generated using an identifier, with the Identity name – in this case Department – and a timestamp.  While the Data Sync tool can handle this type of file name with a wildcard, in this example the ‘Pre-defined value’ is selected so that the export creates the same file each time – called ‘Department.csv’.

Windows7_x64

e. Save the Export Configuration.  This needs to be done before the schedule and command line syntax can be generated.

f. To generate the operating system dependent syntax to run the extract from a command line, check the ‘Enable Schedule Monitoring’ on the General tab, then ‘Click here to configure schedule’.

g. Select the operating system, and interval, and then ‘Build Command Line’.

h. The resulting code can be Copied to the clipboard.  Save this.  It will be used in the final section of the article to configure the command line used by the scheduler to run the Taleo extract process.

Windows7_x64

i.  Manually execute the job by selecting the ‘gear’ icon

 

Menubar

 

j. Follow the status in the monitoring window to the right hand side of the screen.

In this example, the Department.csv file was created in 26 seconds.  This will be used in the next step with the Data Sync tool.

Windows7_x64

 

2. Data Sync Tool

The Data Sync Tool can be downloaded from OTN through this link.

For more information on installing and configuring the tool, see this post that I wrote last year.  Use this to configure the Data Sync tool, and to set up the TARGET connection for the BICS environment where the Taleo data will be loaded.

 

Configuring the Taleo Data Load

a. Under “Project” and “File Data”, create a new source file for the ‘Department.csv’ file created by the Taleo Connect tool.

1

Windows7_x64

b. Under ‘Import Options’, manually enter the following string for the Timestamp format.

yyyy-MM-dd'T'HH:mm:ssX

This is the format that the Taleo Extract uses, and this needs to be defined within the Data Sync tool so that the CSV file can be parsed correctly.

1

c. Enter the name of the Target table in BICS.  In this example, a new table called ‘TALEO_DEPARTMENT’ will be created.

Windows7_x64

d. The Data Sync tool samples the data and determines the appropriate data type for each column.  Confirm these are correct and change them if necessary.

Windows7_x64

e. If a new table is being created in BICS as part of this process, it is often a better idea to let the Data Sync tool create that table so it has the permissions it requires to load data and create any necessary indexes.  Under ‘Project’ / ‘Target Tables’ right click on the Target table name, and select ‘Drop/Create/Alter Tables’

Windows7_x64

f. In the resulting screen, select ‘Create New’ and hit OK.  The Data Sync tool will connect to the BICS Target environment and execute the SQL required to create the TALEO_DEPARTMENT target table

2

g. If an incremental load strategy is required, select the ‘Update table’ option as shown below

Windows7_x64

h. Select the unique key on the table – in this case ‘Number’

Windows7_x64

i. Select the ‘LastModifiedDate’ for the ‘Filters’ section.  Data Sync will use this to identify which records have changed since the last load.

Windows7_x64

In this example, the Data Sync tool suggests a new Index on the target table in BICS.  Click ‘OK’ to let it generate that on the Target BICS database.

Windows7_x64

 

Create Data Sync Job

Under ‘Jobs’, select ‘New’ and name the job.  Make a note of the Job name, as this will be used later in the scheduling and automation of this process

 

Windows7_x64

 

Run Data Sync Job

a. Execute the newly created Job by selecting the ‘Run Job’ button

Windows7_x64

b. Monitor the progress under the ‘Current Jobs’ tab.

Windows7_x64

c. Once the job completes, go to the ‘History’ tab, select the job, and then in the bottom section of the screen select the ‘Tasks’ tab to confirm everything ran successfully.  In this case the ‘Status Description’ confirms the job ‘Successfully completed’ and that 1164 rows were loaded into BICS, with 0 Failed Rows.  Investigate any errors and make changes before continuing.

Windows7_x64

 

3. Configuring and Scheduling Process

As an overview of the process, a ‘.bat’ file will be created and scheduled to run.  This ‘bat’ file will execute the extract from Taleo, with that CSV file being saved to the local file system.  The second step in the ‘.bat’ file will create a ‘touch file’.  The Data Sync Tool will monitor for the ‘touch file’, and once found, will start the load process.  As part of this, the ‘touch file’ will automatically be deleted by the Data Sync tool, so that the process is not started again until a new CSV file from Taleo is generated.

a. In a text editor, create a ‘.bat’ file.  In this case the file is called ‘Taleo_Department.bat’.

b. Use the syntax generated in step ‘2 h’ in the section where the ‘Taleo Export Configuration’ was created.

c. Use the ‘call’ command before this command.  Failure to do this will result in the extract being completed, but the next command in the ‘.bat’ file not being run.

d. Create the ‘touch file’ using an ‘echo’ command.  In this example a file called ‘DS_Department_Trigger.txt’ will be created (see the sample ‘.bat’ file after step e).

Windows7_x64

e. Save the ‘bat’ file.
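As an illustration, the finished ‘.bat’ file might look like the sketch below.  The extract command is the one generated in step 2h, and the paths and file names shown here are placeholders for this example rather than definitive values.

REM Taleo_Department.bat - run the Taleo extract, then signal the Data Sync tool
REM Substitute the command line generated in step 2h for the line below (illustrative path)
call "C:\TaleoConnectClient\TaleoConnectClient.bat" "C:\TCC\Department-export_cfg.xml"
REM Create the touch file that the Data Sync tool will monitor for
echo extract complete> C:\Users\oracle\Documents\DS_Department_Trigger.txt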

f. Configure the Data Sync tool to look for the Touch File created in step d, by editing the ‘on_demand_job.xml’, which can be found in the ‘conf-shared’ directory within the Data Sync main directory structure.

Windows7_x64

g. At the bottom of the file in the ‘OnDemandMonitors’ section, change the ‘pollingIntervalInMinutes’ to be an appropriate value. In this case Data Sync will be set to check for the Touch file every minute.

h. Add a line within the <OnDemandMonitors> section to define the Data Sync job that will be Executed once the Touch file is found, and the name and path of the Touch file to be monitored.

Windows7_x64

In this example, the syntax looks like this

<TriggerFile job="Taleo_Load" file="C:\Users\oracle\Documents\DS_Department_Trigger.txt"/>

 

The Data Sync tool can be configured to monitor for multiple touch files, each of which would trigger a different job.  A separate line entry is required for each.
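Putting these pieces together, a hypothetical ‘OnDemandMonitors’ section monitoring two touch files might look like the sketch below.  The job names and paths are illustrative, and the assumption that the polling interval is an attribute of this element follows the description above; keep whatever structure your shipped ‘on_demand_job.xml’ already uses.

<OnDemandMonitors pollingIntervalInMinutes="1">
  <TriggerFile job="Taleo_Load" file="C:\Users\oracle\Documents\DS_Department_Trigger.txt"/>
  <TriggerFile job="Taleo_Requisition_Load" file="C:\Users\oracle\Documents\DS_Requisition_Trigger.txt"/>
</OnDemandMonitors>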

i. The final step is to schedule the ‘.bat’ file to run at a suitable interval.  Within Windows, the ‘Task Scheduler’ can be found beneath the ‘Accessories’ / ‘System Tools’ section under the ‘All Programs’ menu.  In Linux, use the ‘crontab’ command.

 

Summary

This article walked through the steps for configuring the Taleo Connect Client to download data from Taleo and save to a location to be automatically consumed by the Data Sync tool, and loaded to BICS.

 

Further Reading

Taleo Product Documentation

Getting Started with Taleo Connect Client

Configuring the Data Sync Tool for BI Cloud Services

Using Oracle BI Answers to Extract Data from HCM via Web Services


Introduction

Oracle BI Answers, also known as ‘Analyses’ or the ‘Analysis Editor’, is a reporting tool that is part of Oracle Transactional Business Intelligence (OTBI) and is available within the Oracle Human Capital Management (HCM) product suite.

This article will outline an approach in which a BI Answers report is used to extract data from HCM via web services.  This provides an alternative to the file-based loader process (details of which can be found here).

This can be used for both Cloud and On-Premise versions of Oracle Fusion HCM.

Please note – the HCM team recommends the HCM Extract process as the preferred approach to extracting data from HCM.  That method is outlined in this article.

Another recent method, which uses the ‘Data Sync’ tool, is similar to that described below but helps to automate the process and adds true incremental load capability; it is covered in this article.

Main Article

During regular product updates to Oracle HCM, underlying data objects may be changed.  As part of the upgrade process, these changes will automatically be updated in the pre-packaged reports that come with Oracle HCM, and also in the OTBI ‘Subject Areas’ – a semantic layer used to aid report writers by removing the need to write SQL directly against the underlying database.

As a result it is highly recommended to use either a pre-packaged report, or to create a new report based on one of the many OTBI Subject Areas, to prevent extracts subsequently breaking due to the changing data structures.

Pre-Packaged Reports

Pre-packaged reports can be found by selecting ‘Catalog’, expanding ‘Shared Folders’ and looking in the ‘Human Capital Management’ sub-folder.  If a pre-packaged report is used, make a note of the full path of the report shown in the ‘Location’ box below.  This path, and the report name, will be required for the WSDL.

Windows7_x64

Ad-Hoc Reports

To create an Ad-Hoc report, a user login with the minimum of BI Author rights is required.

a. Select ‘New’ and then ‘Analysis’

Windows7_x64

b. Select the appropriate HCM Subject Area to create a report.

Windows7_x64

c. Expand the folders and drag the required elements into the report.

d. Save the report into a shared location.  In this example this is being called ‘Answers_demo_report’ and saved into this location.

/Shared Folders/Custom/demo

This path will be referenced later in the WSDL.

Edit_Post_‹_ATeam_Chronicles_—_WordPress

Building Web Service Request

To create and test the Web Service, this post will use the opensource tool SoapUI.  This is free and can be downloaded here:

https://www.soapui.org

Within SoapUI, create a new SOAP project.  For the Initial WSDL address, use the Cloud or On-Premise URL, appending ‘/analytics-ws/saw.dll/wsdl/v7’

For example:

https://cloudlocation.oracle.com/analytics-ws/saw.dll/wsdl/v7

or

https://my-on-premise-server.com/analytics-ws/saw.dll/wsdl/v7

This will list the available WSDLs

 

Calling the BI Answers report is a two-step process:

1. Within SoapUI, expand ‘SAWSessionService’ and then ‘logon’.  Make a copy of the example ‘Request’ message, then update it to add the username and password of a user with credentials to run the BI Answers report.

Run that request and a sessionID is returned:

SoapUI_4_6_4
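For reference, the logon request has roughly the shape shown below.  This is a sketch based on the SoapUI-generated sample; the namespace prefix and exact element layout come from the request generated in your own project, and the credentials are placeholders.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Header/>
   <soapenv:Body>
      <v7:logon>
         <v7:name>username</v7:name>
         <v7:password>password</v7:password>
      </v7:logon>
   </soapenv:Body>
</soapenv:Envelope>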

2. In SoapUI expand ‘XmlViewService’ / ‘executeXMLQuery’.  Make a copy of the example ‘Request’, edit it, and insert the BI Answers report name and path into the <v7:reportPath> element, and the sessionID from the first step into the <v7:sessionID> element.

Note that while in the GUI the top level of the path is shown as ‘Shared Folders’, in the request it is replaced with ‘shared’.  The rest of the path matches the format from the GUI.

You will notice a number of other options available.  For this example we are going to ignore those.
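Trimmed to the two values this example actually sets, the executeXMLQuery request is shaped roughly as follows.  Keep the remaining elements exactly as SoapUI generated them from the WSDL; the report path and sessionID values below are placeholders.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Body>
      <v7:executeXMLQuery>
         <v7:report>
            <v7:reportPath>/shared/Custom/demo/Answers_demo_report</v7:reportPath>
            <v7:reportXml></v7:reportXml>
         </v7:report>
         <!-- other generated elements (output format, execution options, report parameters) left as generated -->
         <v7:sessionID>sessionID-returned-by-logon</v7:sessionID>
      </v7:executeXMLQuery>
   </soapenv:Body>
</soapenv:Envelope>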

You can then execute the web service request.  The report returns the data as an XML stream, which can then be parsed by your code.

3

Summary

This post demonstrated a simple method to leverage BI Answers and the underlying OTBI Subject Areas within Oracle HCM to create and call a report via a web service, extracting data for a downstream process.

Cloud Security: Federated SSO for Fusion-based SaaS


Introduction

To get you easily started with Oracle Cloud offerings, they come with their own user management. You can create users, assign roles, change passwords, etc.

However, real-world enterprises already have existing Identity Management solutions and want to avoid maintaining the same information in many places. To avoid duplicate identities and the related security risks, like out-of-sync passwords, outdated user information, or rogue or locked user accounts, single sign-on solutions are mandatory.

This post explains how to set up Federated Single Sign-on with Oracle SaaS to enable users present in existing Identity Management solutions to work with the Oracle SaaS offerings without additional user setup. After a quick introduction to Federated Single Sign-on based on SAML, we explain the requirements and the setup of Oracle SaaS for Federated Single Sign-on.

Federated Single Sign-on

Federated Single Sign-on, or Federated SSO, based on SAML Web Browser Single Sign-on is a widely used standard in many enterprises worldwide.

The SAML specification defines three roles: the principal (typically a user), the Identity Provider, and the Service Provider. The Identity Provider and the Service Provider form a Circle of Trust and work together to provide a seamless authentication experience for the principal.

SAML Login Flows

The most commonly used SAML login flows are Service Provider Initiated Login and Identity Provider Initiated Login, as shown below.

Service Provider Initiated Login

The Service Provider Initiated Login is the most common login flow and is used by users without explicitly starting it. Pointing the browser to an application page is usually all that is needed.

Here the principal requests a service from the Service Provider. The Service Provider requests and obtains an identity assertion from the Identity Provider and decides whether to grant access to the service.

SAML_SP_Initiated_Login_0

Identity Provider Initiated Login

SAML allows multiple Identity Providers to be configured for the same Service Provider. Deciding which of these Identity Providers is the right one for the principal is possible, but not always easy to set up. The Identity Provider Initiated Login allows the principal to help here by picking the correct Identity Provider as a starting point. The Identity Provider creates the identity assertion and redirects to the Service Provider, which is then able to decide whether to grant access to the service.

SAML_IdP_Initiated_Login_0

Oracle SaaS and Federated SSO

Here Oracle SaaS acts as the Service Provider and builds a Circle of Trust with a third-party, on-premise Identity Provider. This setup applies to all Fusion Applications based SaaS offerings (like Oracle Sales Cloud, Oracle HCM Cloud, or Oracle ERP Cloud) and looks like this.

SaaS_SP_OnPrem_IDP
The setup requires a joint effort of the customer and Oracle Cloud Support.

Scenario Components

The components of this scenario are:

  • Oracle SaaS Cloud (based on Fusion Applications, for example, Oracle Sales Cloud, Oracle HCM Cloud, Oracle ERP Cloud)
  • Any supported SAML 2.0 Identity Provider, for example:
    • Oracle Identity Federation 11g+
    • Oracle Access Management 11gR2 PS3+
    • AD FS 2.0+
    • Shibboleth 2.4+
    • Okta 6.0+
    • Ping Federate 6.0+
    • Ping One
    • etc.

The list of the supported SAML 2.0 Identity Providers for Oracle SaaS is updated regularly, and is available as part of the Fusion Applications Technology: Master Note on Fusion Federation (Support Doc ID 1484345.1).

Supported SAML 2.0 Identity Providers

The Setup Process

To set up this scenario, Oracle Cloud Support and the customer work together to create an operational setup.

Setup of the On-Premise Identity Provider

To start the setup, the on-premise Identity Provider must be configured to fulfill these requirements:

  • It must implement the SAML 2.0 federation protocol.
  • The SAML 2.0 browser artifact SSO profile has been configured.
  • The SAML 2.0 Assertion NameID element must contain one of the following (see the example after this list):
    • The user’s email address with the NameID Format being Email Address
    • The user’s Fusion uid with the NameID Format being Unspecified
  • All Federation Identity Provider endpoints must use SSL.
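For example, an assertion that identifies the principal by email address would carry a NameID element along these lines (the value is illustrative):

<saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">jane.doe@example.com</saml:NameID>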
Setup of the Oracle SaaS Cloud Service Provider

Once the on-premise Identity Provider has been configured successfully, the following steps outline the process to request the setup of Oracle SaaS as Service Provider for Federated SSO with the customer's on-premise Identity Provider:

Step 1 (Customer): File a Service Request to enable the required Oracle SaaS instance as Service Provider. The Service Request must follow the documented requirements.
(See Support Doc ID 1477245.1 or 2100578.1 for details.)
Step 2 (Oracle): Approves the Service Request.
Step 3 (Customer): Receives a document that describes how to configure the on-premise Identity Provider for the Service Provider.
Step 4 (Customer): When the conformance check has been completed successfully, uploads the Identity Provider metadata as an XML file to the Service Request.
Step 5 (Oracle): Configures the Service Provider in a non-production SaaS environment. When this is completed, the Service Provider metadata is attached to the Service Request as an XML file for the customer. This file includes all the information required to add the Service Provider as a trusted partner to the Identity Provider.
Step 6 (Customer): Downloads the Service Provider metadata file and imports it into the Identity Provider.
Step 7 (Oracle): Adds the provided Identity Provider metadata to the Service Provider setup.
Step 8 (Oracle): After the completion of the Service Provider setup, publishes a verification link in the Service Request.
Step 9 (Customer): Uses the verification link to test the features of Federated SSO.

Note: No other operations are allowed during this verification.

Step 10 (Customer): When the verification has been completed, updates the SR to confirm the verification.
Step 11 (Oracle): Finalizes the configuration procedures.
Step 12 (Customer): Is solely responsible for authenticating users.

When Federated SSO has been enabled, only those users whose identities have been synchronized between the on-premise Identity Provider and Oracle Cloud will be able to log in. To support this, Identity Synchronization must be configured (see below).

Identity Synchronization

Federated SSO only works correctly when users of the on-premise Identity Store and of the Oracle SaaS identity store are synchronized. The following sections outline the steps in general. The detailed steps will be covered in a later post.

Users are First Provisioned in Oracle SaaS

The general process works as follows:

Step Oracle SaaS On-premise Environment
1. Setup Extraction Process
2. Download Data
3. Convert Data into Identity Store Format
4. Import Data into Identity Store

Users are First Provisioned in On-Premise Environment

It is very common that users already exist in on-premise environments. To allow these users to work with Oracle SaaS, they have to be synchronized into Oracle SaaS. The general process works as follows:

Step Oracle SaaS On-premise Environment
1. Extract Data
2 Convert data into supported file format
3 Load user data using supported loading methods

References

Accessing Fusion Data from BI Reports using Java


Introduction

In a recent article on A-Team Chronicles, Richard Williams explained how you can execute a BI Publisher report from a SOAP service and retrieve the report, as XML, as part of the response of the SOAP call.  This article serves as a follow-on, providing a tutorial-style walkthrough of how to implement the above procedure in Java.

This article assumes you have already followed the steps in Richard’s blog article: created your report in BI Publisher, exposed it as a SOAP service, and tested it using SoapUI or another SOAP testing tool.

Following Richard’s guidance, we know that the correct SOAP call could look like this:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
   <soap:Header/>
   <soap:Body>
      <pub:runReport>
         <pub:reportRequest>
            <pub:reportAbsolutePath>/~angelo.santagata@oracle.com/Bi report.xdo</pub:reportAbsolutePath>
            <pub:reportRawData xsi:nil="true" >true</pub:reportRawData>
            <pub:sizeOfDataChunkDownload>-1</pub:sizeOfDataChunkDownload>
            <pub:flattenXML>true</pub:flattenXML>
            <pub:byPassCache>true</pub:byPassCache>
         </pub:reportRequest>
         <pub:appParams/>
      </pub:runReport>
   </soap:Body>
</soap:Envelope>

Tip: One easy way to determine the report’s location is to run the report and then examine the URL in the browser.

 

Implementing the SOAP call using JDeveloper 11g

We now need to implement the Java SOAP client to call our SOAP service. For this blog we will use JDeveloper 11g, the IDE recommended for extending Oracle Fusion; however, you are free to use your IDE of choice, e.g. NetBeans, Eclipse, vi, Notepad etc., in which case the steps will obviously be different.

Creating the project

Within JDeveloper 11g start by creating a new Application and within this application create two generic projects. Call one project “BISOAPServiceProxy” and the other “FusionReportsIntegration”. The “BISOAPServiceProxy” project will contain a SOAP proxy we are going to generate from JDeveloper 11g, and the “FusionReportsIntegration” project will contain our custom client code. It is good practice to create separate projects so that the SOAP proxy resides in its own project; this allows us to regenerate the proxy from scratch without affecting any other code.

Generating the SOAP Proxy

For this example we will be using the SOAP Proxy wizard as part of JDeveloper. This functionality generates a static proxy for us, which in turn makes it easier to generate the required SOAP call later.

  1. With the BISOAPServiceProxy project selected, start the JDeveloper SOAP Proxy wizard: File -> New -> Business Tier -> Web Services -> Web Service Proxy
  Proxy1
  2. Click Next.
  3. Skipping the first welcome screen, in step 2 of the wizard select JAX-WS Style as the type of SOAP proxy you wish to generate. In step 3 enter the WSDL of your Fusion Applications BI Publisher web service. It is best to check that this URL returns a WSDL document in your web browser before entering it here. The WSDL location will normally be something like: http://<your fusion Applications Server>/xmlpserver/services/ExternalReportWSSService?wsdl
  Proxy2
  It is recommended that you leave the “Copy WSDL into project” check-box selected.
  4. Give a package name; unless you need to, it is recommended to leave the Root Package for generated types blank.
  proxy3
  5. Now hit Finish.

Fixing the project dependencies

We now need to make sure that the “FusionReportsIntegration” project is able to see the classes generated by the “BISOAPServiceProxy” proxy. To resolve this in JDeveloper we simply need to set up a dependency between the two projects.

  1. With the FusionReportsIntegration project selected, right-mouse click on the project and select “Project Properties”.
  2. In the properties panel select Dependencies.
  3. Select the little pencil icon and in the resulting dialog select “Build Output”. This selection tells JDeveloper that “this project depends on the successful build output” of the other project.
  4. Save the dialog.
  dependancies1
  5. Close [OK] the Project Properties dialog.
  6. Now is a good time to hit compile and make sure the SOAP proxy compiles without any errors; given we haven’t written any code yet, it should compile just fine.

Writing the code to execute the SOAP call

With the SOAP proxy generated and the project dependency set up, we’re now ready to write the code which will call the BI Server using the generated SOAP proxy.

  1. With the FusionReportsIntegration project selected, right-mouse click -> New -> Java -> Java Class
  javacode
  2. Enter a name, and a Java package name, for your class.
  3. Ensure that “Main Method” is selected. This is so we can execute the code from the command line; you will want to change this depending on where you execute your code from, e.g. a library, a servlet, etc.
  4. Within the main method you will need to enter the following code snippet. Once this code snippet is pasted you will need to correct and resolve the imports for your project.
    1.	ExternalReportWSSService_Service externalReportWSSService_Service;
    2.	// Initialise the SOAP Proxy generated by JDeveloper based on the following WSDL xmlpserver/services/ExternalReportWSSService?wsdl
    3.	externalReportWSSService_Service = new ExternalReportWSSService_Service();
    4.	// Set security Policies to reflect your fusion applications
    5.	SecurityPoliciesFeature securityFeatures = new SecurityPoliciesFeature(new String[]
    6.	{ "oracle/wss_username_token_over_ssl_client_policy" });
    7.	// Initialise the SOAP Endpoint
    8.	ExternalReportWSSService externalReportWSSService = externalReportWSSService_Service.getExternalReportWSSService(securityFeatures);
    9.	// Create a new binding, this example hardcodes the username/password, 
    10.	// the recommended approach is to store the username/password in a CSF keystore
    11.	WSBindingProvider wsbp = (WSBindingProvider)externalReportWSSService;
    12.	Map<String, Object> requestContext = wsbp.getRequestContext();
    13.	//Map to appropriate Fusion user ID, no need to provide password with SAML authentication
    14.	requestContext.put(WSBindingProvider.USERNAME_PROPERTY, "username");
    15.	requestContext.put(WSBindingProvider.PASSWORD_PROPERTY, "password");
    16.	requestContext.put(WSBindingProvider.ENDPOINT_ADDRESS_PROPERTY, "https://yourERPServer:443/xmlpserver/services/ExternalReportWSSService");
    
    17.	// Create a new ReportRequest object using the generated ObjectFactory
    18.	ObjectFactory of = new ObjectFactory();
    19.	ReportRequest reportRequest = of.createReportRequest();
    20.	// reportAbsolutePath contains the path+name of your report
    21.	reportRequest.setReportAbsolutePath("/~angelo.santagata@oracle.com/Bi report.xdo");
    22.	// We want raw data
    23.	reportRequest.setReportRawData("");
    24.	// Get all the data
    25.	reportRequest.setSizeOfDataChunkDownload(-1); 
    26.	// Flatten the XML response
    27.	reportRequest.setFlattenXML(true);
    28.	// ByPass the cache to ensure we get the latest data
    29.	reportRequest.setByPassCache(true);
    30.	// Run the report
    31.	ReportResponse reportResponse = externalReportWSSService.runReport(reportRequest, "");
    32.	// Display the output, note the response is an array of bytes, you can convert this to a String
    33.	// or you can use a DocumentBuilder to put the values into a XLM Document object for further processing
    34.	System.out.println("Content Type="+reportResponse.getReportContentType());
    35.	System.out.println("Data ");
    36.	System.out.println("-------------------------------");
    37.	String data=new String (reportResponse.getReportBytes());
    38.	System.out.println(data);
    39.	System.out.println("-------------------------------");
Going through the code

    Line    What does it do
    1-3 This is the instantiation of a new class containing the WebService Proxy object. This was generated for us earlier
    5 Initialise a new instance of a security policy object, with the correct security policy, for your Oracle Fusion server. The most common security policy is “oracle/wss_username_token_over_ssl_client_policy”; however, your server may be set up differently
    8 Calls the factory method to initialise a SOAP endpoint with the correct security features set
    9-16 These lines set up the SOAP binding so that it knows which endpoint to execute (i.e. the hostname+URI of your web service, which is not necessarily the endpoint where the SOAP proxy was generated), the username and the password. In this example we are hard-coding the details because we are going to be running this example on the command line. If this code is to be executed on a JEE server, e.g. WebLogic, then we recommend this data is stored in the Credential store as CSF keys.
    17-19 Here we create a reportRequest object and populate it with the appropriate parameters for the SOAP call. Although not mandatory, it is recommended that you use the ObjectFactory generated by the SOAP proxy wizard in JDeveloper.
    21 This sets the reportAbsolutePath parameter, including the path to the report
    23 This line ensures we get the raw data without decoration, layouts etc.
    25 By default BI Publisher returns data in ranges, e.g. 50 rows at a time; for this use case we want all the rows, and setting this to -1 will ensure this
    27 Tells the web service to flatten out the XML which is produced
    29 This is an optional flag which instructs the BI Server to bypass the cache and go direct to the database
    30 This line executes the SOAP call, passing the reportRequest object we previously populated as a parameter. The return value is a reportResponse object
    34-39 These lines print out the results from the BI Server. Of notable interest is that the XML document is returned as a byte array. In this sample we simply print out the results to the output; however, you would normally pass the resulting XML into Java routines to generate an XML Document object for further processing.

 

 

Because we are running this code from the command line as a standalone Java client, we need to import the Fusion Applications certificate into the Java keystore. If you run the code from within JDeveloper, the Java keystore used is <JDeveloperHome>\wlserver_10.3\server\lib\DemoTrust.jks

Importing certificates

 

  1. Download the Fusion Applications SSL certificate: using a browser like Internet Explorer, navigate to the SOAP WSDL URL.
  2. Mouse click on the security icon, which will bring you to the certificate details.
  3. View the certificate.
  4. Export the certificate as a CER file.
  5. From the command line we now need to import the certificate into our DemoTrust.jks file using the following command:

keytool -import -alias fusionKey -file fusioncert.cer -keystore DemoTrust.jks

jks

Now ready to run the code!

With the runReport.java file selected, press the “Run” button. If all goes well the code will execute and you should see the XML result of the BI report displayed on the console.

 

Oracle Data Integrator (ODI) for HCM-Cloud: a Knowledge Module to Generate HCM Import Files


Introduction

For batch imports, Oracle Cloud’s Human Capital Management (HCM) uses a dedicated file format that contains both metadata and data. As far as the data is concerned, the complete hierarchy of parent and child records must be respected for the file content to be valid.

To load data into HCM with ODI, we are looking here into a new Integration Knowledge Module (KM). This KM allows us to leverage ODI to prepare the data and generate the import file. Then traditional Web Services connections can be leveraged to load the file into HCM.

Description of the Import File Format

HCM uses a structured file format that follows a very specific syntax so that complex objects can be loaded. The complete details of the syntax for the import file are beyond the scope of this article; we only provide an overview of the process here. For more specific instructions, please refer to Oracle Human Capital Management Cloud: Integrating with Oracle HCM Cloud.

The loader for HCM (HCL) uses the following syntax:

  • Comments are used to make the file easier to read by humans. All comment lines must start with the keyword COMMENT
  • Because the loader can be used to load all sorts of business objects, the file must describe the metadata of the objects being loaded. This includes the objects’ names along with their attributes. Metadata information must be prefixed by the keyword METADATA.
  • The data for the business objects can be inserted or merged. The recommended approach is to merge the incoming data: in this case data to be loaded is prefixed with the keyword MERGE, immediately followed by the name of the object to be loaded and the values for the different attributes.

The order in which the different elements are listed in the file is very important:

  • Metadata for an object must always be described before data is provided for that object;
  • Parent objects must always be described before their dependent records.

In the file example below we are using the Contact business object because it is relatively simple and makes for easier descriptions of the process. The Contact business object is made of multiple components: Contact, ContactName, ContactAddress, etc. Notice that in the example the Contact components are listed before the ContactName components, and that data entries are always placed after their respective metadata.

COMMENT ##############################################################
COMMENT HDL Sample files.
COMMENT ##############################################################
COMMENT Business Entity : Contact
COMMENT ##############################################################
METADATA|Contact|SourceSystemOwner|SourceSystemId|EffectiveStartDate|EffectiveEndDate|PersonNumber|StartDate
MERGE|Contact|ST1|ST1_PCT100|2015/09/01|4712/12/31|STUDENT1_CONTACT100|2015/09/01
MERGE|Contact|ST1|ST1_PCT101|2015/09/01|4712/12/31|ST1_CT101|2015/09/01
COMMENT ##############################################################
COMMENT Business Entity : ContactName
COMMENT ##############################################################
METADATA|ContactName|SourceSystemOwner|SourceSystemId|PersonId(SourceSystemId)|EffectiveStartDate|EffectiveEndDate|LegislationCode|NameType|FirstName|MiddleNames|LastName|Title
MERGE|ContactName|ST1|ST1_CNTNM100|ST1_PCT100|2015/09/01|4712/12/31|US|GLOBAL|Emergency||Contact|MR.
MERGE|ContactName|STUDENT1|ST1_CNTNM101|ST1_PCT101|2015/09/01|4712/12/31|US|GLOBAL|John||Doe|MR.

Figure 1: Sample import file for HCM

The name of the file is imposed by HCM (the file must have the name of the parent object that is loaded). Make sure to check the HCM documentation for the limits on size and number of records for the file that you are creating. We will also have to zip the file before uploading it to the Cloud.

Designing the Knowledge Module

Now that we know what needs to be generated, we can work on creating a new Knowledge Module to automate this operation for us. If you need more background on KMs, the ODI documentation has a great description available here.

With the new KM, we want to respect all the constraints imposed by the loader for the file format. We also want to simplify the creation of the file as much as possible.

Our reasoning was that if ODI is used to prepare the file, the environment would most likely be such that:

  • Data has to be aggregated, augmented from external sources or somehow processed before generating the file;
  • Some of the data is coming from a database, or a database is generally available.

We designed our solution by creating database tables that matched the components of the business object that can be found in the file. This gives us the ability to enforce referential integrity: once primary keys and foreign keys in place in the database, parent records are guaranteed to be available in the tables when we want to write a child record to the file. Our model is the following for the Contact business object:

Data Model for HCM load

Figure 2: Data structure created in the database to temporarily store and organize data for the import file

We are respecting the exact syntax (case sensitive) for the table names and columns. This is important because we will use these metadata to generate the import file.

The metadata need to use the proper case in ODI – depending on your ODI configuration, this may result in mixed case or all uppercase table names in your database. Either case works for the KM.

At this point, all we need is for our KM to write to the file when data is written to the tables. If the target file does not exist, the KM creates it with the proper header. If it does exist, the KM appends the metadata and data for the current table to the end of the file. Because of the referential integrity constraints in the database, we have to load the parent tables first… this will guarantee that the records are added to the file in the appropriate order. All we have to do is to use this KM for all the target tables of our model, and to load the tables in the appropriate order.

For an easy implementation, we took the IKM Oracle Insert and modified it as follows:

  • We added two options: one to specify the path where the HCM import file must be generated, the other for the name of the file to generate;
  • We created a new task to write the content of the table to the file, once data has been committed to the table. This task is written in Groovy and shown below in figure 3:

import groovy.sql.Sql
File file = new File('<%=odiRef.getOption("HCM_IMPORT_FILE_FOLDER")%>/<%=odiRef.getOption("HCM_IMPORT_FILE_NAME")%>')
if (!file.exists()){
file.withWriterAppend{w->
w<<"""COMMENT ##################################################################

COMMENT File generated by ODI
COMMENT Based on HDL Desktop Integrator- Sample files.
"""
  }
}
file.withWriterAppend{w->
  w<<"""
COMMENT ##########################################################################
COMMENT Business Entity : <%=odiRef.getTargetTable("TABLE_NAME")%>
COMMENT ###########################################################################
"""
  }
file.withWriterAppend{w->
w<<"""METADATA|<%=odiRef.getTargetTable("TABLE_NAME")%>|<%=odiRef.getTargetColList("", "[COL_NAME]", "|", "")%>
""".replace('"', '')
  }
// Connect to the target database
def db = [url:'<%=odiRef.getInfo("DEST_JAVA_URL")%>', user:'<%=odiRef.getInfo("DEST_USER_NAME")%>', password:'<%=odiRef.getInfo("DEST_PASS")%>', driver:'<%=odiRef.getInfo("DEST_JAVA_DRIVER")%>']
def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)
// Retrieve data from the target table and write the data to the file
sql.eachRow('select * from  <%=odiRef.getTable("L","TARG_NAME","D")%>') { row ->
     file.withWriterAppend{w->
w<<"""MERGE|<%=odiRef.getTargetTable("TABLE_NAME")%>|<%=odiRef.getColList("","${row.[COL_NAME]}", "|", "", "")%>
""".replace('null','')
  }
 }
sql.close()

Figure 3: Groovy code used in the KM to create the import file

If you are interested in this implementation, the KM is available here for download.

Now all we have to do is to use the KM in our mappings for all target tables.

HCM KM in Use

Figure 4: The KM used in a mapping

We can take advantage of the existing options in the KM to either create the target tables if they do not exist or truncate them if they already exist. This guarantees that we only add new data to the import file.

Testing the Knowledge Module

To validate that the KM is creating the file as expected, we have created a number of mappings that load the 6 tables of our data model. Because one of our source files contains data for more than just one target table, we create a single mapping to load the first three tables. In this mapping, we specify the order in which ODI must process these loads as shown in figure 5 below:

Ensure data load order

Figure 5: Ensuring load order for the target tables… and for the file construction.

The remaining table loads can be designed either as individual mappings or consolidated in a single mapping if the transformations are really basic.

We can then combine these mappings in a package that waits for incoming data (incoming files or changes propagated by GoldenGate). The Mappings process the data and create the import file. Once the file is created, we can zip it to make it ready for upload and import with web services, a subject that is discussed in Using Oracle Data Integrator (ODI) to Bulk Load Data into HCM-Cloud.
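As an illustration, the compression step can be a simple Groovy task in the package (for example, in an ODI procedure step), along the lines of the sketch below.  The file name and path are placeholders; the only firm requirement described above is that the import file keeps the name of its parent business object (Contact in this example).

import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

// Illustrative paths: the source file is the one generated by the KM
def sourceFile = new File('/odi/files/Contact.dat')
def zipStream  = new ZipOutputStream(new FileOutputStream('/odi/files/Contact.zip'))
zipStream.putNextEntry(new ZipEntry(sourceFile.getName()))
zipStream.write(sourceFile.bytes)   // copy the file content into the zip entry
zipStream.closeEntry()
zipStream.close()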

The complete package looks like this:

 

HCM Load package

Figure 6: Package to detect arriving data, process them with the new KM and generate an import file for HCM, compress the file and invoke the necessary web services to upload and import the file.

With this simple package, you can start bulk loading business objects into HCM-Cloud with ODI.

The web service to import data into HCM requires the use of OWSM security policies. To configure OWSM with ODI, please see Connecting Oracle Data Integrator (ODI) to the Cloud: Web Services and Security Policies.

Conclusion

With relatively simple modifications to an out-of-the-box ODI Knowledge Module, the most advanced features of ODI can now be leveraged to generate an import file for HCM and to automate the load of batch data into the cloud.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator.

Acknowledgements

Special thanks to Jack Desai and Richard Williams for their help and support with HCM and its load process.

References

Creating custom Fusion Applications User Interfaces using Oracle JET


Introduction

JET is Oracle’s new JavaScript toolkit, written specifically to help developers build client-side applications. Oracle Fusion Applications implementers are often given the requirement to create mobile, or desktop browser, based custom screens for Fusion Applications. There are many options available to the developer, for example Oracle ADF (Java based) and Oracle JET (JavaScript based). This blog article gives the reader a tutorial-style document on how to build a hybrid application using data from Oracle Fusion Sales Cloud. It is worth highlighting that although this tutorial uses Sales Cloud, the technique below is equally applicable to HCM Cloud, or any other Oracle SaaS cloud product which exposes a REST API.

Main Article

Pre-Requisites

It is assumed that you’ve already read the getting started guide on the Oracle JET website and installed all the pre-requisites. In addition, if you are going to create a mobile application, you will also need to install the mobile SDKs from either Apple (Xcode) or Android (Android SDK).

 

You must have an Apple Mac to be able to install the Apple iOS developer kit (Xcode); it is not possible to run Xcode on a Windows PC.

Dealing with SaaS Security

Before building the application itself we need to start executing the REST calls and getting our data, and security is going to be the first hurdle we need to cross. Most Sales Cloud installations allow “Basic Authentication” to their APIs, so in REST this involves creating an HTTP header called “Authorization” with the value “Basic <your username:password>”, with the <username:password> section encoded as Base64. An alternative approach, used when embedding the application within Oracle SaaS, is to use a generated JWT token. This token is generated by Oracle SaaS using either Groovy or expression language. When embedding the application in Oracle SaaS you have the option of passing parameters; the JWT token would be one of these parameters and can subsequently be used instead of the <username:password>. When using a JWT token the Authorization string changes slightly so that instead of “Basic” it becomes “Bearer”.

 

Usage                | Header Name   | Header Value
Basic Authentication | Authorization | Basic <your username:password base64 encoded>
JWT Authentication   | Authorization | Bearer <JWT Token>
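In JavaScript, for example, these headers can be built as shown below; the credentials and the token variable are placeholders, and btoa() performs the Base64 encoding in the browser.

// Basic authentication: username:password, Base64 encoded
var basicAuthHeader = { "Authorization": "Basic " + btoa("username:password") };
// JWT authentication: token passed in from Fusion Applications
var jwtAuthHeader = { "Authorization": "Bearer " + jwtToken };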

 

Groovy Script in SalesCloud to generate a JWT Token

def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication" )
def crmkey= (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
def url = thirdpartyapplicationurl +"?jwt="+crmkey
return (url)

Expression Language in Fusion SaaS (HCM, Sales, ERP etc) to generate a JWT Token

#{EndPointProvider.externalEndpointByModuleShortName['My3rdPartApplication']}?jwt=#{applCoreSecuredToken.trustToken}

Getting the data out of Fusion Applications using the REST API

When retrieving data from Sales Cloud we need to make sure we get the right data, not too much and not too little. Oracle Sales Cloud, like many other Oracle SaaS products, now supports the REST API for inbound and outbound data access. Oracle HCM also has a REST API but, at the time of writing this article, that API is in controlled availability.

Looking at the documentation hosted at the Oracle Help Center: http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/

The REST call to get all Sales Cloud Opportunities looks like this :

https://yourCRMServer/salesApi/resources/latest/opportunities

If you execute the above REST call you will notice that the resulting payload is large, some would say huge. There are good reasons for this: the Sales Cloud Opportunity object contains a large number of fields; the result contains not only data but also metadata; and the request above is a select-all query. The metadata includes links to child collections, links to lists of values, which tabs are visible in Sales Cloud, custom objects, flexfields, etc. Additionally, the query we just executed is the equivalent of a select * from table, i.e. it brings back everything, so we'll also need to fix that.

 

Example snippet of a SalesCloud Opportunity REST Response showing custom fields,tabs visible, child collections etc

"Opportunity_NewQuote_14047462719341_Layout6": "https://mybigm.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=3000000xxx44105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "Opportunity_NewQuote_14047462719341_Layout6_Layout7": "https://mybigMmachine.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060xxxx5%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout7Expr": "false",
  "ExtnFuseOpportunityEditLayout6Expr": "false",
  "ExtnFuseOpportunityCreateLayout3Expr": "false",
  "Opportunity_NewQuote_14047462719341_Layout8": "https://mybigm-demo.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060744105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout8Expr": "false",
  "CreateProject_c": null,
  "Opportunity_DocumentsCloud_14399346021091": "https://mydoccloud.documents.us2.oraclecloud.com/documents/embed/link/LF6F00719BA6xxxxxx8FBEFEC24286/folder/FE3D00BBxxxxxxxxxxEC24286/lyt=grid",
  "Opportunity_DocsCloud_14552023624601": "https://mydocscserver.domain.com:7002/SalesCloudDocCloudServlet/doccloud?objectnumber=2169&objecttype=OPPORTUNITY&jwt=eyJhxxxxxy1pqzv2JK0DX-xxxvAn5r9aQixtpxhNBNG9AljMLfOsxlLiCgE5L0bAI",
  "links": [
    {
      "rel": "self",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item",
      "properties": {
        "changeIndicator": "ACED0005737200136A6176612E7574696C2E41727261794C6973747881D21D99C7619D03000149000473697A65787000000002770400000010737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B020000787200106A6176612E6C616E672E4F626A65637400000000000000000000007870000000017371007E00020000000178"
      }
    },
    {
      "rel": "canonical",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item"
    },
    {
      "rel": "lov",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169/lov/SalesStageLOV",
      "name": "SalesStageLOV",
      "kind": "collection"
    },

Thankfully we can tell the REST API that we :

  • Only want to see the data, achieved by adding onlyData=true parameter
  • Only want to see the following fields OpportunityNumber,Name,CustomerName (TargetPartyName), achieved by adding a fields=<fieldName,fieldname> parameter
  • Only want to see a max of 10 rows, achieved by adding the limit=<value> parameter
  • Only want to see open opportunities, achieved by adding the q= parameter with a query string, in our case StatusCode=OPEN

If we want to get the data in pages/blocks we can use the offset parameter. The offset parameter tells the REST service to get the data “from” this offset. Using offset and limit we can effectively page through the data returned by Oracle Fusion Applications REST Service.
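For example, with limit=10 the successive pages of an otherwise identical query would be requested like this (server name omitted; query strings are illustrative):

/salesApi/resources/latest/opportunities?onlyData=true&limit=10&offset=0     <- rows 1-10
/salesApi/resources/latest/opportunities?onlyData=true&limit=10&offset=10    <- rows 11-20
/salesApi/resources/latest/opportunities?onlyData=true&limit=10&offset=20    <- rows 21-30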

Our final REST request URL would look like :

https://myCRMServeroracledemos.com/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&offset=0&limit=10

The Oracle Fusion Applications REST API is documented in the relevant Oracle Fusion Applications Documentation, e.g. for Sales Cloud, http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/ but it is also worth noting that the Oracle Fusion Applications REST Services are simply an implementation of the Oracle ADF Business Components REST Services, these are very well documented here  https://docs.oracle.com/middleware/1221/adf/develop/GUID-8F85F6FA-1A13-4111-BBDB-1195445CB630.htm#ADFFD53992
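For a quick test outside of JET, the same request can be issued with a command-line HTTP client such as curl; the server name and credentials below are placeholders.

curl -u username:password "https://yourCRMServer.domain.com/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&offset=0&limit=10"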

Our final tuned JSON result from the REST service will look something like this (truncated) :

{
  "items": [
    {
      "Name": "Custom Sentinel Power Server @ Eagle",
      "OptyNumber": "147790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Eagle Software Inc",
      "Revenue": 104000
    },
    {
      "Name": "Ultra Servers @ Beutelschies & Company",
      "OptyNumber": "150790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Beutelschies & Company",
      "Revenue": 175000
    },
    {
      "Name": "Diablo Technologies 1012",
      "OptyNumber": "176800",
      "StatusCode": "OPEN",
      "TargetPartyName": "Diablo Technologies",
      "Revenue": 23650
    }
  ]
}

Creating the Hybrid Application

Now that we have our data source defined we can start to build the application. We want this application to be available on a mobile device and therefore we will create a “Mobile Hybrid” application using Oracle JET, using the navDrawer template.

yo oraclejet:hybrid OSCOptyList --template=navDrawer --platforms=android

Once the yeoman script has built your application, you can test the (basic) application using the following two commands.

grunt build --platform=android 
grunt serve --platform=android --web=true

The second grunt serve command has a web=true parameter at the end; this tells the script that we’re going to be testing this in our browser and not on the device itself. When this is run you should see a basic shell [empty] application in your browser window.


Building Our JavaScript UI

Now that we have our data source defined we can get on with the task of building the JET user interface. Previously you executed the yo oraclejet:hybrid command, which created a hybrid application for you using a template. Opening the resulting project in an IDE, like NetBeans, we can see that the project template has created a collection of files, one of which is “dashboard.html” (marked 1 in the image). Edit this file using your editor.

dashboard.html

 

Within the file delete everything and replace it with this snippet of html code

<div class="oj-hybrid-padding">
    <div class="oj-flex">
        <div class="oj-flex-item">
            <button id= "prevButton" 
                    data-bind="click: previousPage, 
                       ojComponent: { component: 'ojButton', label: 'Previous' }">
            </button>
            <button id= "nextButton"
                    data-bind="click: nextPage, 
                       ojComponent: { component: 'ojButton', label: 'Next' }">
            </button>
        </div>
    </div>
    <div class="oj-flex-item">    
        <div class="oj-panel oj-panel-alt1 oj-margin">
            <table id="table" summary="Opportunity List" aria-label="Opportunity List"
                   data-bind="ojComponent: {component: 'ojTable', 
                                data: opportunityDataSource, 
                                columnsDefault: {sortable: 'none'}, 
                                columns: [{headerText: 'Opty Number', 
                                           field: 'OptyNumber'},
                                          {headerText: 'Name', 
                                           field: 'Name'},
                                          {headerText: 'Revenue', 
                                           field: 'Revenue'},
                                          {headerText: 'Customer Name', 
                                           field: 'TargetPartyName'},
                                          {headerText: 'Status Code', 
                                           field: 'StatusCode'}
           ]}">
            </table>
        </div>    
    </div>
</div>

The above piece of HTML adds a JET table to the page; for prettiness we’ve wrapped the table in a decorative panel and added next and previous buttons. The table definition tells Oracle JET that the data is coming from a JavaScript object called “opportunityDataSource“; it also defines the columns, the column header text, and that the columns are not sortable. The button definitions reference two functions in our JavaScript (to follow) which will paginate the data.

Building The logic

We can now move on to the JavaScript side of things, that is, the part where we get the data from Sales Cloud and make it available to the table object in the HTML file. For this simplistic example we’ll get the data directly from Sales Cloud and display it in the table, with no caching and nothing fancy like collection models for pagination.

Edit the dashboard.js file; this is marked as 2 in the above image. This file is a RequireJS AMD (Asynchronous Module Definition) module and is pre-populated to support the dashboard.html page.

Within this file, cut-n-paste the following JavaScript snippet.

define(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojtable', 'ojs/ojbutton'],
        function (oj, ko, $) {
            function DashboardViewModel() {
                var self = this;
                var offset = 0;
                var limit = 10;
                var pageSize = 10;
                var nextButtonActive = ko.observable(true);
                var prevButtonActive = ko.observable(true);
                //
                self.optyList = ko.observableArray([{Name: "Fetching data"}]);
                console.log('Data=' + self.optyList);
                self.opportunityDataSource = new oj.ArrayTableDataSource(self.optyList, {idAttribute: 'Name'});
                self.refresh = function () {
                    console.log("fetching data");
                    var hostname = "https://yourCRMServer.domain.com";
                    var queryString = "/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&limit=10&offset=" + offset;
                    console.log(queryString);
                    $.ajax(hostname + queryString,
                            {
                                method: "GET",
                                dataType: "json",
                                headers: {"Authorization": "Basic " + btoa("username:password")},
                                // Alternative Headers if using JWT Token
                                // headers : {"Authorization" : "Bearer "+ jwttoken; 
                                success: function (data)
                                {
                                    self.optyList(data.items);
                                    console.log('Data returned ' + JSON.stringify(data.items));
                                    console.log("Rows Returned"+self.optyList().length);
                                    // Enable / Disable the next/prev button based on results of query
                                    if (self.optyList().length < limit)
                                    {
                                        $('#nextButton').attr("disabled", true);
                                    } else
                                    {
                                        $('#nextButton').attr("disabled", false);
                                    }
                                    if (self.offset === 0)
                                        $('#prevButton').attr("disabled", true);
                                },
                                error: function (jqXHR, textStatus, errorThrown)
                                {
                                    console.log(textStatus, errorThrown);
                                }
                            }
                    );
                };
                // Handlers for buttons
                self.nextPage = function ()
                {

                    offset = offset + pageSize;
                    console.log("off set=" + offset);
                    self.refresh();
                };
                self.previousPage = function ()
                {
                    offset = offset - pageSize;
                    if (offset < 0)
                        offset = 0;
                    self.refresh();
                };
                // Initial Refresh
                self.refresh();
            }
            
            return new DashboardViewModel;
        }
);

Let’s examine the code

Line 1: Here we’ve modified the standard define so that it includes an ‘ojs/ojtable’ reference. This is telling RequireJS, which the JET toolkit uses, that this piece of JavaScript uses a JET Table object
Line 8 & 9: These lines maintain variables to indicate whether the buttons should be enabled or not
Line 11: Here we create a variable called optyList; importantly, this is created as a Knockout observableArray
Line 13: Here we create another variable called “opportunityDataSource“, which is the variable the HTML page will reference. The main difference here is that this variable is of type oj.ArrayTableDataSource, with its idAttribute set to ‘Name’
Lines 14-47: Here we define a function called “refresh”. When this JavaScript function is called we execute a REST call back to Sales Cloud using jQuery’s ajax call. This call retrieves the data and then populates the optyList Knockout data source with data from the REST call. Specifically, note that we don’t assign the results to the optyList variable directly but purposely pass in a child array called “items”. If you execute the REST call we previously discussed, you’ll note that the data is actually stored in an array called items
Line 23: This line defines the headers, specifically in this case a header called “Authorization”, with the username & password formatted as “username:password” and then Base64 encoded
Lines 24-25: These lines define an alternative header which would be appropriate if a JWT token was being used. This token would be passed in as a parameter rather than being hardcoded
Lines 31-40: These examine the results of the query and determine if the next and previous buttons should be enabled or not, using jQuery to toggle the disabled attribute
Lines 50-63: These manage the next/previous button events
Finally, on line 65 we execute the refresh() method when the module is initiated.

Running the example on your mobile

To run the example on your mobile device, execute the following commands:

grunt build --platform=android 
grunt serve --platform=android

or if you want to test on a device

grunt serve --platform=android --destination=[device or emulator name]

If all is well you should see a table of data populated from Oracle Sales Cloud

 

For more information on building JavaScript applications with the Oracle JET toolkit, check out our other blog articles on JET here, the Oracle JET website here, and the excellent Oracle JET YouTube channel here.

Running the example on the browser and CORS

If you try to run the example in your browser you'll find it probably won't work. If you look at the browser console (Ctrl+Shift+I on most browsers) you'll probably see an error something like "XMLHttpRequest cannot load…".

cors

This is because the code has violated the browser's cross-origin security rules (the same-origin policy). In a nutshell, a JavaScript application cannot access a resource that was not served up by the same server the application itself was served from. In my case the application was served up by NetBeans on http://localhost:8090, whereas the REST service from Sales Cloud is on a different server. Thankfully there is a solution called CORS. CORS stands for Cross-Origin Resource Sharing and is a standard for solving this problem; for more information on CORS see this Wikipedia article, or other articles on the internet.

Configuring CORS in Fusion Applications

For our application to work in a web browser we need to enable CORS in Fusion Applications. We do this with the following steps:

  1. Log into Fusion Applications (Sales Cloud, HCM, etc.) using a user who has access to "Setup and Maintenance"
  2. Access the Setup and Maintenance screens
  3. Search for "Manage Administrator Profile Values" and then navigate to that task
  4. Search for the "Allowed Domains" profile name (case sensitive!)
  5. Within this profile name you see a profile option called "site"; this profile option has a profile value
  6. Within the profile value add the hostname, and port number, of the application hosting your JavaScript application. If you want to allow ALL domains set this value to "*" (a single asterisk). WARNING: Ensure you understand the security implications of allowing ALL domains using the asterisk notation!
  7. Save and Close, then retry running your JET application in your browser.
setupandMaiteanceCORS

CORS Settings in Setup and Maintenance (Click to enlarge)

If all is good when you run the application on your browser, or mobile device, you’ll now see the application running correctly.

JETApplication

Running JET Application (Click to enlarge)

 

Final Note on Security

To keep this example simple, the security username/password was hard-coded in the mobile application, which is not suitable for a real-world application. For a real application you would create a configuration screen, or use system preferences, to collect and store the username, password, and Sales Cloud server URL, which would then be used in the application.

If the JET Application is to be embedded inside a Fusion Applications Page then you will want to use JWT Token authentication. Modify the example so that the JWT token is passed into the application URL as a parameter and then use that in the JavaScript (lines 24-25) accordingly.

For more information on JWT Tokens in Fusion Applications see these blog entries (Link 1, Link 2) and of course the documentation

Conclusion

As we've seen above, it's quite straightforward to create mobile, and browser, applications using the Oracle JET framework. The above example was quite simple and only queried data; a real application would also have some write/delete/update operations, and for those you would want to start looking at the JET Common Model and Collection Framework (DocLink) instead. Additionally, in the above example we queried data directly from a single Sales Cloud instance and did no processing on it. It is very likely that a single mobile application will need to get its data from multiple data sources and require some backend services to pre-process, and probably post-process, the data, in essence providing an API. We call this backend an MBaaS, i.e. Mobile Backend as a Service. Oracle provides an MBaaS in its PaaS suite of products, called Oracle Mobile Cloud Service.

In a future article we will explore how to use Oracle Mobile Cloud Service (Oracle MCS) to query Sales Cloud and Service Cloud and provide an API to the client, which would use the more advanced technique of the JET Common Model/Collection framework.

 

 

Using Oracle Data Integrator (ODI) to Bulk Load Data into HCM-Cloud


Introduction

With its capacity to handle complex transformations and large volumes of data, and its ability to orchestrate operations across heterogeneous systems, ODI is a great tool to prepare and upload bulk data into HCM Cloud.

In this post, we are looking at the different steps required to perform this task.

Overview of the integration process

There are three steps that are required to prepare and upload data into HCM:

  • Transform data and prepare a file that matches the import format expected by HCM. Then ZIP the generated file;
  • Upload the file to UCM-Cloud using the appropriate web service call;
  • Invoke HCM-Cloud to trigger the import process, using the appropriate web service call.

We will now see how these different steps can be achieved with ODI.

Preparing the data for import

We will not go into the details of how to transform data with ODI here: this is a normal use of the product and as such it is fully documented.

For HCM to be able to import the data, ODI needs to generate a file that is formatted according to HCM specifications. For ODI to generate the proper file, the most effective approach is to create a custom Knowledge Module (KM). The details for this Knowledge Module as well as an introduction to the HCM file format are available here: Oracle Data Integrator (ODI) for HCM-Cloud: a Knowledge Module to Generate HCM Import Files. Using this KM, data can be prepared from different sources, aggregated and augmented as needed. ODI will simply generate the import file as data is loaded into a set of tables that reflect the HCM file's business object components.

Once the file has been generated with regular ODI mappings, the ODI tool OdiZip can be used to compress the data. You need to create a package to define the sequence of mappings to transform the data and create the import file. Then add an OdiZip step in the package to compress the file.

ODIZip

Note that the name of the import file is imposed by HCM, but the ZIP file can have any name, which can be very convenient if you want to generate unique file names.

Uploading the file to UCM-Cloud

The web service used to upload the file to UCM is relatively straightforward. The only element we have to be careful with is the need to timestamp the data by setting a start date and a nonce (unique ID) in the header of the SOAP message. We use ODI to generate these values dynamically by creating two variables: StartDate and Nonce.  Both variables are refreshed in the package.

The refresh code for the StartDate variable is the following:

select to_char(sysdate,'YYYY-MM-DD') || 'T' || to_char(systimestamp,'HH24:MI:SSTZH:TZM')
from dual

This formats the date like this: 2016-05-15T04:38:59-04:00

The refresh code for the Nonce variable is the following:

select dbms_random.string('X', 9) from dual

This gives us a 9-character random alphanumeric string, like this: 0O0Q3LRKM
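Outside ODI (for instance, when smoke-testing the UCM web service from a standalone client), the same two values can be produced in plain Java. This is only a sketch; the class name is arbitrary, the timestamp format mirrors the SQL above, and the 9-character nonce length is the same arbitrary choice:

import java.security.SecureRandom;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class WsseHeaderValues {

    // Same shape as the ODI StartDate variable: 2016-05-15T04:38:59-04:00
    static String startDate() {
        return OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXXX"));
    }

    // Same idea as the ODI Nonce variable: a 9-character random alphanumeric string
    static String nonce() {
        String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
        SecureRandom random = new SecureRandom();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 9; i++) {
            sb.append(alphabet.charAt(random.nextInt(alphabet.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(startDate() + " / " + nonce());
    }
}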

We can then set the parameters for the UCM web service using these variables. When we add the OdiInvokeWebService tool to a package, we can take advantage of the HTTP Analyzer to get help with the parameters settings.

HTTP Analyzer

To use the HTTP Analyzer, we first need to provide the WSDL for the service we want to access. Then we click the HTTP Analyzer button: ODI will read the WSDL and build a representation of the service that lets us view and set all possible parameters.

It should go without saying that, for the Analyzer to work, you need to be able to connect to the WSDL.

The Analyzer lets us set the necessary parameters for the header, where we use the variables that we have previously defined:

UCM soap header

We can then set the rest of the parameters for the web service. To upload a file with UCM, we need the following settings:

IdcService: CHECKIN_UNIVERSAL (for more details on this and other available services, check out the Oracle Fusion Middleware Services Reference Guide for Oracle Universal Content Management)

FieldArray: we use the following fields:

  • dDocName: Contact.zip (the name of our file)
  • dDocAuthor: HCM_APP_ADMIN_ALL (our user name in UCM)
  • dDocTitle: Contact File for HCM (label for the file)
  • dDocType: Document
  • dSecurityGroup: Public
  • doFileCopy: TRUE (keep the file on disk after copy)
  • dDocAccount: ODIAccount

 

The screenshot below shows how these parameters can be entered into the Analyzer:

HTTP Analyzer IdcService

In the File Array we can set the name of the file and its actual location:

HTTP Analyzer - File and send

At this point we can test the web service with the Send Request button located at the bottom of the Analyzer window: you see the response from the server on the right-hand side of the window.

If you want to use this test feature, keep in mind that:
– Your ODI variables need to have been refreshed so that they have a value
– The ODI variables need to be refreshed between subsequent calls to the service: you cannot use the same values twice in a row for StartDate and Nonce (or the server would reject your request).

A few comments on the execution of the web service: a successful call to the web service does not guarantee that the operation itself was successful. You want to review the response returned by the service to validate the success of the operation; to do this, make sure that you set the name of the Response File when you set the parameters for the OdiInvokeWebService tool.

All we need to validate in this response file is the content of the element StatusMessage: if it contains 'Successful' then the file was loaded successfully; if it contains 'not successful' then you have a problem. It is possible to build an ODI mapping for this (creating a model for the XML file, reverse-engineering the file, then building the mapping…) but a very simple Groovy script (in an ODI procedure, for instance) gets us there faster and, in case of problems, can write the exact error message returned by the web service to the ODI Operator log:

import groovy.util.XmlSlurper

// You can replace the following hard-coded values with ODI variables. For instance:
// inputFile=#myProject.MyResponseFile
inputFile = 'D/TEMP/HCM//UCMResponse.xml'
XMLTag = 'StatusMessage'
fileContents = new File(inputFile).getText('UTF-8')
def xmlFile = new XmlSlurper().parseText(fileContents)
// Find the first element anywhere in the document whose name matches XMLTag
def responseStatus = new String(xmlFile.'**'.find { node -> node.name() == XMLTag }*.text().toString())
if (responseStatus.contains('Successful')) {
    // some action
} else {
    throw new Exception(responseStatus)
}

This said, if all parameters are set correctly and if you have the necessary privileges on UCM Cloud, at this point the file is loaded on UCM. We can now import the file into HCM Cloud.

Invoking the HCM-Cloud loader to trigger the import process

The HCM web service uses OWSM security policies. If you are not familiar with OWSM security policies, or if you do not know how to setup ODI to leverage OWSM, please refer to Connecting Oracle Data Integrator (ODI) to the Cloud: Web Services and Security Policies. This blog post also describes how to define a web service in ODI Topology.

Once we have the web service defined in ODI Topology, invoking the web service is trivial. When you set the parameters for the ODI tool OdiInvokeWebService in your package, you only need to select a Context as well as the logical schema that points to the web service. Then you can use the HTTP Analyzer to set the parameters for the web service call:

HCM web service call

In our tests we set the ContentId to the name of the file that we want to load, and the Parameters to the following values:

FileEncryption=NONE, DeleteSourceFile=N.

You can obviously change these values as you see fit. The details for the parameters for this web service are available in the document HCM Data Loader Preparation.

Once we have set the necessary parameters for the payload, we just have to set the remainder of the parameters for OdiInvokeWebService. In particular, we need a response file to store the results from the invocation of the web service.

Here again we can use Groovy code to quickly parse the response file and make sure that the load started successfully (this time we are looking for an element named result in the response file).
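If you prefer not to maintain a second Groovy script, the same kind of check can be sketched in plain Java with the JDK's DOM parser. The file name below is a placeholder for the Response File you configured on OdiInvokeWebService, and since the exact success value depends on your HCM release, the sketch only extracts the result element and leaves the interpretation to you:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class HcmResponseCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder path: point this to the Response File written by OdiInvokeWebService
        File responseFile = new File("HCMResponse.xml");
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(responseFile);
        // Look for an element whose local name is "result", in any namespace
        NodeList results = doc.getElementsByTagNameNS("*", "result");
        if (results.getLength() == 0) {
            throw new Exception("No result element found in " + responseFile);
        }
        System.out.println("HCM import result: " + results.item(0).getTextContent());
    }
}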

Make sure that the user you are using to connect and initiate the import process has enough privileges to perform this operation. One easy way to validate this is with the HCM user interface: if you can import the files manually from the HCM user interface, then you have enough privileges to execute the import process with the web service.

The final ODI package looks like this:

HCM Load Final Package

This final web service call initiates the import of the file. You can make additional calls to check on the status of the import (running, completed, aborted) to make sure that the file is successfully imported. The process to invoke these additional web services is similar to what we have done here to import the file.

Conclusion

The features available in ODI 12.2.1 make it relatively easy to generate a file, compress it, upload it to the cloud and import it into HCM-Cloud: we have generated an import file in a proprietary format with a quick modification of a standard Knowledge Module; we have edited the headers and payloads of web services without ever manipulating XML files directly; we have set up security policies quickly by leveraging the ability to define web services in ODI Topology. Now all we have to do is design all the transformations that will be needed to generate the data for HCM-Cloud!

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator.

References


Cloud Security: Using Fusion Application Web Services with Message Protection


Introduction

Oracle Fusion Applications offers a number of WebServices to allow other applications to incorporate the Fusion Applications functionality. To prevent data leakage, these WebServices follow a common security pattern that requires access authentication and message protection using message signing and/or message encryption.

For each such WebService, the WSDL provides all the information a WebService client needs in order to call it successfully.

For Fusion Applications, nearly all WebServices use message protection, i.e., message signing and/or message encryption, to ensure that the message arrives as it has been sent by the client. Like many Oracle products, Fusion Applications implements this by using Oracle WebService Manager (OWSM). OWSM also publishes the WebService’s base64-encoded public key certificate in the WSDL.

This article explains how to get this public key certificate and its related signing root certificate, and how to put them into the correct keystore.

Background

WebServices are a well-known technique used by many applications to expose APIs that allow bigger applications to be built in service-oriented architectures (SOA). WebServices may receive or send sensitive information and should be secured to prevent unauthorized usage. To allow a WebService developer to work on the service implementation only, Oracle provides a security layer for WebServices called Oracle Web Service Manager (OWSM). This layer allows security measures, called policies in OWSM parlance, to be configured by service administrators at runtime. The WebService security policy is enforced by OWSM agents that intercept the WebService's incoming and outgoing traffic; this frees developers from implementing all possible security measures themselves and allows them to focus fully on the service implementation.

Certificates, Certificates, Certificates

Well secured WebServices require a number of certificates for proper message protection and secured transport. For Fusion Application Web Services these certificate types are very common:

  • Transport Level Security Certificate – The certificate used for securing the transport level (i.e., HTTPS). It can be easily retrieved from the browser session when inspecting the WSDL file. It is usually stored in the JDK truststore.
  • The Owner Certificate – This certificate is part of the WSDL, and used for message signing and/or encryption and may be installed in the client keystore.
  • The Issuer Certificate – (Optional) The Issuer Certificate name is part of the WSDL description. In later versions of OWSM the Issuer Certificate is included in the WSDL, too.

Finding the OWSM Security Policy

A WebService WSDL protected by OWSM lists the security policies used for the WebService. The implementer of a client for the WebService can easily spot the security-related content.

The <wsp:Policy> tags may include OWSM policies, if the wsu:Id attribute specifies OWSM policy names like these:

<wsp:Policy wsu:Id="wss11_saml_or_username_token_with_message_protection_service_policy">
<wsp:Policy wsu:Id="wss11_saml_token_with_message_protection_client_policy">

If the <wsp:Policy wsu:Id> attribute includes the text message_protection, the related information, i.e., the certificates, for the message protection policies must be found.

Message protection uses X.509 certificates for a public key which can be used to encrypt and/or sign the SOAP message (see X.509 for a detailed description). An OWSM protected WebService may include one or more X.509 certificates, which can be found within the <wsdl:service> tag.

<wsdl:service name="FinancialUtilService">

The <wsdl:service> tag has a few subtags. The most interesting is the <wsid:Identity> tag.

<wsid:Identity>
 <dsig:KeyInfo>
  <dsig:X509Data>
   <dsig:X509Certificate>MIICUDCCAbmgAwIBAgIIcIrTEM228yQwDQYJKoZIhvcNAQEFBQAw
    VzETMBEGCgmSJomT8ixkARkWA2NvbTEWMBQGCgmSJomT8ixkARkWBm9yYWNsZTEVMBMGCgmSJomT8ix
    ...
    uJZwkAwdUZXpk7GfIo136l6wQDtmCl/k=</dsig:X509Certificate>
   <dsig:X509IssuerSerial>
    <dsig:X509IssuerName>CN=Cloud9CA, DC=cloud, DC=oracle, DC=com</dsig:X509IssuerName>
    <dsig:X509SerialNumber>8109526148158255908</dsig:X509SerialNumber>
   </dsig:X509IssuerSerial>
   <dsig:X509SubjectName>CN=FAEncryption, DC=cloud, DC=oracle, DC=com</dsig:X509SubjectName>
   <dsig:X509SKI>epsQzG3qkIZbd7Ia5NzRiQDfb3g=</dsig:X509SKI>
  </dsig:X509Data>
 </dsig:KeyInfo>
</wsid:Identity>

The important tags here are <dsig:X509Certificate>, <dsig:X509IssuerName>, <dsig:X509SubjectName>. The tag <dsig:X509Certificate> holds the actual certificate required for message protection. The tag <dsig:X509SubjectName> specifies the name of the certificate. And finally, the tag <dsig:X509IssuerName> specifies the name of the certificate that was used for signing the certificate in <dsig:X509Certificate>. This is usually the name of the root certificate.

If the <dsig:X509SubjectName> and the <dsig:X509IssuerName> match, the <dsig:X509Certificate> is a self-signed certificate and only this certificate is needed.

Extract the Certificate

Once the certificate has been located, it needs to be extracted and stored into a Java keystore (jks file). To do this, the value between <dsig:X509Certificate> and </dsig:X509Certificate> needs to be selected and copied into an editor. Before saving the content into a file, it must have a line -----BEGIN CERTIFICATE----- before the certificate and a line -----END CERTIFICATE----- after it. It should look like this:

-----BEGIN CERTIFICATE-----
MIICUDCCAbmgAwIBAgIIcIrTEM228yQwDQYJKoZIhvcNAQEFBQAwVzETMBEGCgmSJomT8ixkARkWA2Nv
bTEWMBQGCgmSJomT8ixkARkWBm9yYWNsZTEVMBMGCgmSJomT8ixkARkWBWNsb3VkMREwDwYDVQQDEwhD
...
OfGDtW/MLQpL2i8dL+SgEmjGUGtZuqEojTRE1IB/G+UuJZwkAwdUZXpk7GfIo136l6wQDtmCl/k=
-----END CERTIFICATE-----

When done, this should be stored in a file (for example owner_cert.cer).
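Before importing the file it can be useful to sanity-check it. A small sketch using the JDK's CertificateFactory (the file name matches the example above; the class name is arbitrary) prints the Subject and Issuer, which should correspond to the <dsig:X509SubjectName> and <dsig:X509IssuerName> values from the WSDL; if the two are equal, the certificate is self-signed:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class InspectCertificate {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream("owner_cert.cer")) {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println("Subject: " + cert.getSubjectX500Principal());
            System.out.println("Issuer : " + cert.getIssuerX500Principal());
            // If Subject and Issuer are identical, this is a self-signed (or root) certificate
            boolean selfSigned = cert.getSubjectX500Principal().equals(cert.getIssuerX500Principal());
            System.out.println("Self-signed: " + selfSigned);
        }
    }
}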

Setting up the Keystore

Next, this certificate should be imported into a JKS keystore file. Although there are many tools for doing this, the standard JDK keytool is a reliable choice for this task. To import the certificate, use the command keytool -importcert.

Note: Searching the Internet for a certificate import procedure often shows a two-step process. However, even if there is no keystore file available, the keytool -importcert command creates a keystore containing just the new certificate.
$ keytool -importcert -trustcacerts -alias orakey -keystore client.jks -file owner_cert.cer
Enter keystore password:
Re-enter new password:
Owner: CN=service, DC=us, DC=oracle, DC=com
Issuer: CN=CertGenCA, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
Serial number: 15633202a8b
Valid from: Thu Jul 28 22:09:21 CEST 2016 until: Tue Jul 27 22:09:21 CEST 2021
Certificate fingerprints:
         MD5:  B3:58:A8:61:A1:97:A2:DB:A6:5F:B3:EB:36:41:87:73
         SHA1: 9A:95:96:23:60:06:55:30:17:58:51:75:AF:2D:A4:A0:AF:65:1F:B9
         SHA256: EC:48:17:95:E6:6C:3A:7D:29:22:3E:21:9A:60:43:06:F5:57:DF:A6:E8:0B:FD:B9:4B:07:8B:E6:6A:73:35:FE
         Signature algorithm name: SHA256withRSA
         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: A8 67 A3 DA E8 52 2E D6   0D 07 93 83 96 3F 9E 09  .g...R.......?..
0010: EF E8 2F 56                                        ../V
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore
$

The values of Owner and Issuer match their respective values of the <dsig:X509SubjectName> and <dsig:X509IssuerName> tags.

Owner vs Issuer

Owner and Issuer are the values that help to identify the certificate. The Owner (subject) is the party the certificate identifies, while the Issuer is the party that created and signed the certificate. Both values can match or be distinct. If they match, we have a root or self-signed certificate.

Root certificates are issued by a Certificate Authority (CA). These play a role similar to government authorities that issue ID cards and passports. They can be trusted. On the other hand, a self-signed certificate can be issued by anyone and may not be trusted.

If the imported certificate has distinct values for Owner and Issuer, this means that at least two different certificates are required:

  • the certificate of the Owner
  • the certificate of the Issuer

Normally, OWSM includes the Owner certificate in the WSDL file.

Getting the Issuer Certificate

If the Owner certificate is not a self-signed certificate there are two options to get the Issuer certificate:

Getting the Issuer Certificate From the WSDL

In later versions of OWSM, the Issuer certificate can be included in the WSDL, too. Here is what the relevant tag of such a WSDL looks like:

<wsid:Identity>
 <dsig:KeyInfo>
  <dsig:X509Data>
   <dsig:X509Certificate>
    MIIDbDCCAlSgAwIBAgIGAVYzICqLMA0GCSqGSIb3DQEBCwUAMHgxCzAJBgNVBAYTAlVTMRAwDgY
    DVQQIEwdNeVN0YXRlMQ8wDQYDVQQHEwZNeVRvd24xFzAVBgNVBAoTDk15T3JnYW5pemF0aW9uMR
    ...
    yHnI/gfr19XWPAtSWVr0XqkTKmBtdtw4AwmEZB5bF08PIh+Ew==</dsig:X509Certificate>
   <dsig:X509IssuerSerial>
    <dsig:X509IssuerName>CN=CertGenCA, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US</dsig:X509IssuerName>
    <dsig:X509SerialNumber>1469736561291</dsig:X509SerialNumber>
   </dsig:X509IssuerSerial>
   <dsig:X509SubjectName>CN=service, DC=us, DC=oracle, DC=com</dsig:X509SubjectName>
   <dsig:X509SKI>qGej2uhSLtYNB5ODlj+eCe/oL1Y=</dsig:X509SKI>
   <dsig:X509Certificate>
    MIIDvzCCAqegAwIBAgIQQARIhsRB7ztkOoBmQJr8oDANBgkqhkiG9w0BAQsFADB4MQswCQYDVQQ
    GEwJVUzEQMA4GA1UECAwHTXlTdGF0ZTEPMA0GA1UEBwwGTXlUb3duMRcwFQYDVQQKDA5NeU9yZ2
    ...
    4OTPTZgMX</dsig:X509Certificate>
  </dsig:X509Data>
 </dsig:KeyInfo>
</wsid:Identity>

The second <dsig:X509Certificate> tag may hold the certificate of the Issuer. This certificate should be copied into a file as described above. To import the Issuer certificate see the import command in the Developer Tasks below.

Getting the Issuer Certificate From the Administrator

The Administrator can be any person who manages a Fusion Applications environment on-premises or on Oracle Cloud. The Administrator steps for both installation options are the same. The steps for the WebService client developer are similar but involve different routes:

  • Cloud – File a Service Request on My Oracle Support
  • On-premises – File a Service Request internally

The steps for the Administrator are as follows:

  • Open the Domain FMW Console
  • Navigate to WebLogic Domain > DomainName > Security > Keystore > system >
  • Select castore
  • Click on Manage
  • Find the certificate which has the matching Issuer name (in column Subject Name)
  • Select the certificate
  • Click on Export
  • Click on Export Certificate
  • Send the file to the WebService client developer

Developer Tasks

A developer needs to do these steps:

  • Ask the Administrator to get the certificate whose Subject Name matches the value of the Issuer
  • When the Issuer certificate has been received from the Administrator, these steps are needed:
    • Import the Issuer certificate into the client keystore:
      $ keytool -importcert -trustcacerts -alias democa -keystore client.jks -file issuer_cert.cer
      Enter keystore password:
      Owner: CN=CertGenCA, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
      Issuer: CN=CertGenCA, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
      Serial number: 40044886c441ef3b643a8066409afca0
      Valid from: Sat Dec 01 04:07:51 CET 2012 until: Thu Dec 02 04:07:51 CET 2032
      Certificate fingerprints:
               MD5:  F2:33:C1:AF:A6:95:8B:A3:5C:CE:DF:0D:16:05:07:AD
               SHA1: CA:61:71:5B:64:6B:02:63:C6:FB:83:B1:71:F0:99:D3:54:6A:F7:C8
               SHA256: 57:10:7C:2C:B3:07:B9:8B:F8:FD:EB:69:99:36:53:03:7A:E1:E7:CB:D3:7A:E7:CF:30:F3:B3:ED:F3:42:0A:D7
               Signature algorithm name: SHA256withRSA
               Version: 3
      
      Extensions:
      
      #1: ObjectId: 2.5.29.19 Criticality=true
      BasicConstraints:[
        CA:true
        PathLen:1
      ]
      
      #2: ObjectId: 2.5.29.15 Criticality=true
      KeyUsage [
        Key_CertSign
      ]
      
      #3: ObjectId: 2.5.29.14 Criticality=false
      SubjectKeyIdentifier [
      KeyIdentifier [
      0000: 34 38 FD 45 D8 80 CF C7   D2 E8 DF 1D F8 A1 39 B0  48.E..........9.
      0010: 11 88 00 6A                                        ...j
      ]
      ]
      
      Trust this certificate? [no]:  yes
      Certificate was added to keystore
      $
      

Using the Certificate

Finally, when every certificate is in place, the WebService client code can be written in any programming language but should use the certificates stored in the client keystore when calling the WebService.

Java code for OWSM Client

One of the best ways to implement a Java Client is to use the JDeveloper WebService Proxy generator. The code uses the OWSM client libraries and frees the developer from coding the details for the security policies.

Once the code has been created, the following code lines help to get started. (Your mileage with the authentication may vary; for simplicity, username/password authentication has been used.)

// (Optional) If the SSL certificate is not present in the standard truststore
// we may override it with these lines:
if (overrideTruststore) {
  System.setProperty("javax.net.ssl.trustStore", trustStore);
  System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);
}
// ...
WSBindingProvider wsbp = (WSBindingProvider)service;
Map<String, Object> requestContext = wsbp.getRequestContext();
requestContext.put(BindingProvider.USERNAME_PROPERTY, username);
requestContext.put(BindingProvider.PASSWORD_PROPERTY, password);
requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endPointURL);
requestContext.put(ClientConstants.WSSEC_KEYSTORE_TYPE, "JKS");
requestContext.put(ClientConstants.WSSEC_KEYSTORE_LOCATION, clientJKS);
requestContext.put(ClientConstants.WSSEC_KEYSTORE_PASSWORD, clientJKSPassword);
Note: If the OWSM client library finds an Owner certificate in the WSDL, it may override the certificate stored in the WSSEC_KEYSTORE_LOCATION. This is a pretty useful feature and frees the client developer from importing the Owner certificate. However, if the keystore will be shared between different WebService client implementations it must contain both certificates. In any case, the Issuer certificate must be present in the WSSEC_KEYSTORE_LOCATION.

References

Cloud Security: Seamless Federated SSO for PaaS and Fusion-based SaaS


Introduction

Oracle Fusion-based SaaS Cloud environments can be extended in many ways. While customization is the standard activity to set up a SaaS environment for your business needs, chances are that you want to extend your SaaS for more sophisticated use cases.

In general this is not a problem and Oracle Cloud offers a great number of possible PaaS components for this. However, user and login experience can be a challenge. Luckily, many Oracle Cloud PaaS offerings use a shared identity management environment to make the integration easier.

This article describes how the integration between Fusion-based SaaS and PaaS works in general and how easily the configuration can be done.

Background

At the moment, Oracle Fusion-based SaaS comes with its own identity management stack. This stack can be shared between Fusion-based SaaS offerings like Global Human Capital Management, Sales Cloud, Financials Cloud, etc.

On the other hand, many Oracle PaaS offerings use a shared identity management (SIM-protected PaaS) and can share it if they are located in the same data center and identity domain. If done right, integration of SIM-protected PaaS and Fusion-based SaaS for Federated SSO can be done quite easily.

Identity Domain vs Identity Management Stack

In Oracle Cloud environments the term identity is used for two different concepts and can be quite confusing.

  • Identity Domain – Oracle Cloud environments are part of an Identity Domain that governs service administration, for example, start and restart of instances, user management, etc. The user management always applies to the service administration UI but may not apply to the managed environments.
  • Identity Management Stack – Fusion-based SaaS has its own Identity Management Stack (or IDM Stack) and is also part of an Identity Domain (for managing the service).

Federated Single Sign-On

As described in Cloud Security: Federated SSO for Fusion-based SaaS, Federated Single Sign-on is the major user authentication solution for Cloud components.

Among its advantages are a single source for user management, single location of authentication data and a chance for better data security compared to multiple and distinct silo-ed solutions.

Components

In general, we have two component groups we want to integrate:

  • Fusion-based SaaS Components – HCM Cloud, Sales Cloud, ERP Cloud, CRM Cloud, etc.
  • SIM-protected PaaS Components – Developer Cloud Service, Integration Cloud Service, Messaging Cloud Service, Process Cloud Service, etc.

Each component group should share the Identity Domain. For seamless integration both groups should be in the same Identity Domain.

Integration Scenarios

The integration between both component groups follows two patterns. The first pattern shows the integration of both component groups in general. The second pattern is an extension of the first, but allows the usage of a third-party Identity Provider solution. The inner workings for both patterns are the same.

Federated Single Sign-On

This scenario can be seen as a "standalone" or self-contained scenario. All users are maintained in the Fusion-based IDM stack and synchronized with the shared identity management stack. The SIM stack acts as the Federated SSO Service Provider and the Fusion IDM stack acts as the Identity Provider. Login for all users and all components is handled by the Fusion IDM stack.

SaaS-SIM-1

Federated Single Sign-On with Third Party Identity Provider

If an existing third-party Identity Provider should be used, the above scenario can be extended as depicted below. The Fusion IDM stack will act as a Federation Proxy and redirect all authentication requests to the third-party Identity Provider.

SaaS-SIM-IdP-2

User and Role Synchronization

User and Role synchronization is the most challenging part of Federated SSO in the Cloud. Although manageable, it can become really difficult if the number of identity silos is too high. The fewer identity silos, the better.

User and Role Synchronization between Fusion-based SaaS and SIM-protected PaaS is expected to be available in the near future.

Requirements and Setup

To get the seamless Federated SSO integration between SIM-protected PaaS and Fusion-based SaaS these requirements have to be fulfilled:

  • All Fusion-based SaaS offerings should be in the same Identity Domain and environment (i.e., sharing the same IDM stack)
  • All SIM-based PaaS offerings should be in the same Identity Domain and data center
  • Fusion-based SaaS and SIM-based PaaS should be in the same Identity Domain and data center

After all, these are just a few manageable requirements which must be mentioned during the ordering process. Once this is done, the integration between Fusion-based SaaS and SIM-protected PaaS will be done automatically.

Integration of a third-party Identity Provider is still an on-request, Service Request based task (see Cloud Security: Federated SSO for Fusion-based SaaS). When requesting this integration, explicitly adding the Federation SSO Proxy setup to the request is strongly recommended!

Note: The seamless Federated SSO integration is a packaged deal and comes with a WebService level integration setting up the Identity Provider as the trusted SAML issuer, too. You can’t get the one without the other.

References

Integrating with Taleo Enterprise Edition using Integration Cloud Service (ICS)


Introduction

Oracle Taleo provides talent management functions as Software as a Service (SaaS). Taleo often needs to be integrated with other human resource systems. In this post, let's look at a few integration patterns for Taleo and at implementing a recommended pattern using Integration Cloud Service (ICS), a cloud-based integration platform (iPaaS).

Main Article

Oracle Taleo is offered in Enterprise and Business editions.  Both are SaaS applications that often need to be integrated with other enterprise systems, on-premise or on the cloud. Here are the integration capabilities of Taleo editions:

  • Taleo Business Edition offers integration via SOAP and REST interfaces.
  • Taleo Enterprise Edition offers integration via SOAP services, SOAP-based Bulk API and Taleo Connect Client (TCC) that leverages the Bulk API.

Integrating with Taleo Business Edition can be achieved with SOAP or REST adapters in ICS, using a simple "Basic Map Data" pattern. Integrating with Taleo Enterprise Edition, however, deserves a closer look and consideration of alternative patterns. Taleo Enterprise provides three ways to integrate, each with its own merits.

Integration using Taleo Connect Client (TCC) is recommended. We'll also address the other two approaches for the sake of completeness. To jump to a sub-section directly, click one of the links below.


Taleo SOAP web services
Taleo Bulk API
Taleo Connect Client (TCC)
Integrating Taleo with EBS using ICS and TCC
Launching TCC client through a SOAP interface


Taleo SOAP web services

Taleo SOAP web services provide synchronous integration. Web services update the system immediately. However, there are restrictive metered limits on the number of invocations and the number of records per invocation, in order to minimize the impact on the live application. These limits might necessitate several web service invocations to finish a job that the other alternatives could complete in a single execution. Figure 1 shows a logical view of such an integration using ICS.

Figure1

Figure1

ICS integration could be implemented using “Basic Map Data” for each distinct flow or using “Orchestration” for more complex use cases.


Taleo Bulk API

Bulk APIs asynchronously exchange data with Taleo Enterprise Edition. Bulk APIs are SOAP-based and require submission of a job, subsequent polling to observe the status of the job and, for read operations, optionally an invocation to fetch the data. Bulk APIs are less restrictive than the SOAP APIs in terms of the volume of records exchanged.

Bulk API invocations could include T-XML queries, CSV or XML content. T-XML queries can be easily generated from the Taleo Connect Client (TCC) editor. Figure 2 shows a logical view of integration using the Bulk API.

Figure2

Figure 2

As seen above, the integration logic is complex, with multiple calls to complete the integration and one or more polling calls to find the status of the request. In addition, the need for TCC to author T-XML import/export queries means that any change to a query requires authoring it again in TCC and redeploying the integration with the modified T-XML to ICS. Also, Bulk API requests that exceed a certain size limit require the data to be sent as MTOM attachments. A link to the Bulk API guide is provided in the References section.


Taleo Connect Client (TCC)

As stated previously, TCC provides the best way to integrate with Taleo Enterprise. TCC has a design editor to author export and import definitions and run configurations. It can also be run from the command line to execute the import or export jobs. TCC leverages the Bulk API to execute the imports and exports, while abstracting the complex steps involved in using the Bulk API directly. A link to another post introducing TCC is provided in the References section.

Figure3

Figure 3

Figure 3 shows a logical view of a solution using TCC and ICS. In this case, ICS orchestrates the flow by interacting with HCM and Taleo. TCC is launched remotely through a SOAP service. TCC, the SOAP launcher service and a staging file system are deployed to an IaaS compute node running Linux.


Integrating Taleo with EBS using ICS and TCC

Let's look at a solution to integrate Taleo and the EBS Human Resources module, using ICS as the central point for scheduling and orchestration. This solution is suitable for ongoing scheduled updates involving a few hundred records per run. Figure 4 represents the solution.

Figure4

Figure 4

TCC is deployed to a host accessible from ICS. The same host runs a Java EE or servlet container, such as WebLogic or Tomcat. The launcher web service deployed to the container launches the TCC client upon a request from ICS. The TCC client, depending on the type of job, either writes a file to a staging folder or reads a file from it. The staging folder could be local or on a shared file system, accessible to ICS via SFTP. Here are the steps performed by the ICS orchestration:

  • Invoke launcher service to run a TCC export configuration. Wait for completion of the export.
  • Initiate SFTP connection to retrieve the export file.
  • Loop through contents of the file. For each row, transform the data and invoke EBS REST adapter to add the record. Stage the response from EBS locally.
  • Write the staged responses from EBS to a file and transfer via SFTP to folder accessible to TCC.
  • Invoke launcher to run a TCC import configuration. Wait for completion of the import.
  • At this point, bi-directional integration between Taleo and EBS is complete.

This solution demonstrates the capabilities of ICS to seamlessly integrate SaaS applications and on-premise systems. ICS triggers the job and orchestrates the export and import activities in a single flow. When the orchestration completes, both Taleo and EBS are updated. Without ICS, the solution would contain a disjointed set of jobs that could be managed by different teams and might require lengthy triage to resolve issues.


Launching TCC client through a SOAP interface

Taleo Connect Client can be run from the command line to execute a configuration that exports or imports data. A cron job or Enterprise Scheduler Service (ESS) could launch the client. However, enabling the client to be launched through a service allows a more cohesive flow in the integration tier and eliminates redundant scheduled jobs.

Here is sample Java code to launch a command-line program. This code launches the TCC client and waits for completion, capturing the command output. Note that the code should be tailored to specific needs, given suitable error handling, and tested for function and performance.

package com.test.demo;
import com.taleo.integration.client.Client;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class tccClient {
    public boolean runTCCJob(String strJobLocation) {
        Process p=null;
        try {
            System.out.println("Launching Taleo client. Path:" + strJobLocation);
            String cmd = "/home/runuser/tcc/scripts/client.sh " + strJobLocation;
            p = Runtime.getRuntime().exec(cmd);
            // Read both the input and error streams; ReadStream is a helper class that drains a stream on its own thread (see the sketch below).
            ReadStream s1 = new ReadStream("stdin", p.getInputStream());
            ReadStream s2 = new ReadStream("stderr", p.getErrorStream());
            s1.start();
            s2.start();
            p.waitFor();
            return true;
        } catch (Exception e) {
            //log and notify as appropriate
            e.printStackTrace();
            return false;
        } finally {
            if (p != null) {
                p.destroy();
            }
        }
    }
}
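The sample above references a ReadStream helper that is not shown. A minimal sketch of such a helper, assuming its only job is to drain a process stream on its own thread and echo it to the console (which keeps the launched TCC process from blocking on a full output buffer), could look like this:

package com.test.demo;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ReadStream extends Thread {
    private final String name;
    private final InputStream is;

    public ReadStream(String name, InputStream is) {
        this.name = name;
        this.is = is;
    }

    @Override
    public void run() {
        // Drain the stream line by line so the launched TCC process never blocks on output
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(is))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("[" + name + "] " + line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}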

Here is a sample launcher service implemented with JAX-WS and SOAP.

package com.oracle.demo;
import javax.jws.WebService;
import javax.jws.WebMethod;
import javax.jws.WebParam;

@WebService(serviceName = "tccJobService")
public class tccJobService {

    @WebMethod(operationName = "runTCCJob")
    public String runTCCJob(@WebParam(name = "JobPath") String JobPath) {
        try{
        return new tccClient().runTCCJob(JobPath);
        }
        catch(Exception ex)
        {
            ex.printStackTrace();
            return ex.getMessage();
        }
    }
}
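For a quick local test of the launcher outside a full WebLogic or Tomcat deployment, the JAX-WS runtime bundled with the JDK (up to Java 8) can host the service directly. The publisher class name, port and context path below are arbitrary placeholders:

package com.oracle.demo;
import javax.xml.ws.Endpoint;

public class tccJobServicePublisher {
    public static void main(String[] args) {
        // Publishes the service and prints the WSDL location; stop with Ctrl+C
        Endpoint.publish("http://localhost:9090/tccJobService", new tccJobService());
        System.out.println("WSDL available at http://localhost:9090/tccJobService?wsdl");
    }
}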

Finally, this is a SOAP request that could be sent from an ICS orchestration to launch the TCC client.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:demo="http://demo.oracle.com/">
   <soapenv:Header/>
   <soapenv:Body>
      <demo:runTCCJob>
         <!--Optional:-->
         <JobPath>/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml</JobPath>
      </demo:runTCCJob>
   </soapenv:Body>
</soapenv:Envelope>
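If you want to smoke-test the launcher service without ICS, the same payload can be posted with a few lines of SAAJ (part of the JDK up to Java 8, otherwise available as a separate library). The endpoint URL below is a placeholder; the namespace and job path come from the sample request above:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;

public class TccLauncherTestClient {
    public static void main(String[] args) throws Exception {
        String request =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:demo=\"http://demo.oracle.com/\">"
          + "<soapenv:Body><demo:runTCCJob>"
          + "<JobPath>/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml</JobPath>"
          + "</demo:runTCCJob></soapenv:Body></soapenv:Envelope>";

        SOAPMessage message = MessageFactory.newInstance()
            .createMessage(null, new ByteArrayInputStream(request.getBytes(StandardCharsets.UTF_8)));

        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        try {
            // Endpoint URL is a placeholder; use the host and port where the launcher service is deployed
            SOAPMessage response = connection.call(message, "http://localhost:9090/tccJobService");
            response.writeTo(System.out);
        } finally {
            connection.close();
        }
    }
}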

Summary

This post addressed alternative patterns to integrate with Taleo Enterprise Edition, along with the pros and cons of each pattern. It explained a demo solution based on the recommended pattern using TCC and provided code snippets and steps to launch the TCC client via a web service. At the time of this post's publication, ICS does not offer a Taleo-specific adapter. A link to the current list of supported adapters is provided in the References section.

 

References

  • Getting started with Taleo Connect Client (TCC) – ATeam Chronicles
  • Taleo Business Edition REST API guide
  • Taleo Enterprise Edition Bulk API guide
  • Latest documentation for Integration Cloud Service
  • Currently available ICS adapters

Techniques used to build PaaS4SaaS Mashup using Oracle Application Builder Cloud Service

Introduction Oracle ABCS is Oracle's new citizen developer service allowing citizen developers the ability to quickly build applications for the web/mobile web. A common request from SaaS customers is the desire to have both standalone and embedded custom screens for their SaaS solutions. These obviously need to be feature rich, sexy UIs and above all […]

BI Cloud Connector – Download Data Extraction Files

Introduction The Oracle Fusion based SaaS offerings provide an interesting tool to extract data and to store them in CSV format on a shared resource like the built-in UCM server or a Storage Cloud: BI Cloud Connector (BICC). The extracted data can be copied from these cloud resources and downloaded to local resources, before post-processing […]