Insights on Application Programming Interface (API)-led Connectivity

‘API-led connectivity’ is a buzzword in the industry, especially in the integration domain and on API-led integration platforms like MuleSoft. The approach promises to connect and expose an organization’s assets and to deliver integrations with speed and reuse.

What exactly does API-led connectivity do? 

API-led connectivity is a method of connecting data to applications through reusable, purposeful APIs. Each API is developed for a distinct role: unlocking data from systems, composing that data into processes, or delivering an experience.

When an organization adopts API-led connectivity, every stakeholder in the business can discover and reuse these APIs through self-service, improving their ability to deliver projects and capable applications.

Why is API-led connectivity important? 

API-led connectivity decentralizes access to enterprise data and relies on reusable APIs to create new capabilities and services. The reusable assets produced can unlock critical systems such as data sources, legacy applications, and SaaS apps. IT teams can then reuse these API assets to compose process-level functionality. This approach increases the agility, speed, and productivity of the integration process.

APIs that enable API-led connectivity 

API-led connectivity connects and exposes assets so that, instead of building point-to-point connections, each asset becomes a modern, managed API that is visible through self-service without losing control.

The APIs used in the API-led connectivity approach fall into three categories:

  • System APIs
  • Process APIs
  • Experience APIs

System APIs 

System APIs give us the means to insulate data consumers from the complexity of, and changes to, the underlying systems. Once they are built, consumers can access the data without understanding the details of those systems. System APIs are fine-grained, independent, and highly reusable.

They integrate well and often support more than one value stream or enterprise capability, which makes defining business ownership and shared-usage contexts complex. In the absence of a shared-infrastructure business capability, enterprises may fall back on other methods to derive ownership in simpler terms. In short, System APIs expose the underlying back-end systems and insulate callers from changes to those assets.

When allocating System API ownership, considering the short-term and long-term goals of the shared System API simplifies the decision. For example, a shared system that needs little ongoing enhancement but must always be available carries less risk for whoever takes on the business owner’s role.

A technical owner can bring a balanced view under pressure and operational experience that matches the system’s importance. On the other hand, a shared system used to introduce new products and applications is better served, for the near future, by a genuine business owner backed by sound technical input.

Another common assumption is that shared resources require shared leadership. Establishing shared-infrastructure governance committees can combine diverse knowledge behind a single point of ownership.
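To make the System API idea concrete, below is a minimal, illustrative sketch (not MuleSoft-specific) of a System API: a thin HTTP service that wraps a hypothetical legacy customer store and hides its details behind a stable contract. The port, path, and in-memory "legacy store" are assumptions for illustration only.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Illustrative System API: exposes a hypothetical legacy customer store over a
// simple, stable HTTP contract so consumers never touch the back end directly.
public class CustomerSystemApi {

    // Stand-in for a legacy database or packaged application (assumption).
    private static final Map<String, String> LEGACY_STORE =
            Map.of("42", "{\"id\":\"42\",\"name\":\"Jane Doe\",\"segment\":\"gold\"}");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        // GET /customers/{id} -> JSON record, regardless of how the back end stores it
        server.createContext("/customers/", exchange -> {
            String id = exchange.getRequestURI().getPath().replace("/customers/", "");
            String body = LEGACY_STORE.getOrDefault(id, "{\"error\":\"not found\"}");
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(LEGACY_STORE.containsKey(id) ? 200 : 404, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
        System.out.println("System API listening on http://localhost:8081/customers/{id}");
    }
}

If the back end later changes (say, the legacy store is replaced by a SaaS app), only this API changes; its consumers keep calling the same contract.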

Process APIs 

Process APIs create business value by working across one or more systems, generally by composing one or more System APIs.

Process APIs provide a way to integrate data and orchestrate multiple System APIs for a specific business purpose (e.g., 360-degree customer views, order fulfilment, etc.). They are often used to build business entities whose attributes are managed in multiple systems of record (e.g., SAP, Salesforce) across various business functions (CRM, customer support, etc.).

Process APIs reflect business services and usually support the business delivery portfolio (i.e., products and services). Ownership of a Process API typically resides with the owner of the value stream that includes the supported products and services. Where that is not the case, increasing levels of cooperation are needed among many stakeholders. This collaboration can be achieved by designating a lead organization that manages the Process API along with its traffic, release cycle, and performance-management strategy.

Ownership concerns for Process APIs can be even more complex than for System APIs. The number of interfaces involved is greater than for a typical System API, which usually fronts a single system of record. In addition, composed services have many dependencies, which can make it significantly harder to control quality of service across functionality, error management, tracking, segmentation, and so on.

Because Process APIs sit so close to concrete business collaboration, assigning technical staff to play the role of business owner becomes less appropriate and more problematic. On the other hand, given the technical difficulty of composed services that are abstracted away from any single experience, defining a single owner involves its own trade-offs.
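As a sketch of the orchestration idea, the following hypothetical Process API composes two System APIs (a customer API and an order API; the URLs and payloads are assumptions) into a single 360-degree customer view. It is illustrative only and not tied to any particular platform.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative Process API logic: compose two System APIs into one
// "customer 360" response for a given customer id.
public class Customer360ProcessApi {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Hypothetical System API endpoints (assumptions for the example).
    private static final String CUSTOMER_SYSTEM_API = "http://localhost:8081/customers/";
    private static final String ORDER_SYSTEM_API = "http://localhost:8082/orders?customerId=";

    static String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    // Merge the two System API payloads into a single process-level document.
    static String customer360(String customerId) throws Exception {
        String customer = fetch(CUSTOMER_SYSTEM_API + customerId);
        String orders = fetch(ORDER_SYSTEM_API + customerId);
        return "{\"customer\":" + customer + ",\"orders\":" + orders + "}";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(customer360("42"));
    }
}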

Experience APIs 

Experience APIs are similar to Process APIs in that they integrate content, features, and functionality from many other APIs. However, unlike Process APIs, Experience APIs are tied primarily to a specific business context and reshape data formats, interaction patterns, or contracts (rather than processing or creating them) for a particular channel and context.

Experience APIs are a way to tailor data to the convenience of its target audience, all from the same data source, instead of setting up an individual point-to-point integration for each channel. An Experience API is usually built as a first-class API designed around the intended user experience.

Additionally, Experience APIs serve a presentation layer specific to a particular business context. They provide a way to deliver pre-formatted data, tailored to the intended audience, and can quickly shape that data to suit the target business environment.

Remember, Experience APIs are designed to let application developers quickly provide data to front-end applications used by customers, partners, and employees. Experience APIs therefore follow a contract-first approach: the API contract is specified before the actual implementation of the API, and it is defined by the API consumers in collaboration with user-experience experts who determine, from a design perspective, how the front end should use it.
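The sketch below illustrates the reshaping role of an Experience API in plain Java: the same process-level customer view is trimmed and re-labelled for a hypothetical mobile channel. The field names and the channel are assumptions for illustration only.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative Experience API mapping: reshape a process-level record
// into the smaller, channel-specific payload a mobile app expects.
public class MobileExperienceMapper {

    static Map<String, Object> toMobileView(Map<String, Object> customer360) {
        Map<String, Object> view = new LinkedHashMap<>();
        // Keep only what the mobile screen needs, with its own field names.
        view.put("displayName", customer360.get("name"));
        view.put("tier", customer360.get("segment"));
        view.put("openOrders", customer360.getOrDefault("orderCount", 0));
        return view;
    }

    public static void main(String[] args) {
        Map<String, Object> processPayload = Map.of(
                "id", "42", "name", "Jane Doe", "segment", "gold", "orderCount", 3,
                "internalRiskScore", 0.12);      // internal field, not exposed to the channel
        System.out.println(toMobileView(processPayload));
    }
}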

The Way Forward

The API-led connectivity approach to delivering IT projects saves time and budget from the very first project. It also creates reusable assets that form a resilient infrastructure with better visibility, compliance, and governance, meeting business needs and storing value for the long term.

It gives you the ability to move faster on your first project and accelerate progressively from your second project onwards, thanks to reusable assets and organizational enablement. API-led connectivity frees up resources and allows you to refactor and move faster.

MuleSoft CloudHub Vs Runtime Fabric

When it comes to deploying Mule applications and APIs on customer-managed infrastructure (whether on-premises or on a private IaaS such as Microsoft Azure or AWS), Anypoint Runtime Fabric is the right solution. For any Mule application deployment, Runtime Manager is available on both Anypoint Platform Private Cloud Edition and the MuleSoft-hosted Anypoint Platform.

Today, many businesses aiming for digital transformation consider CloudHub for its scaling and advanced management features. With Anypoint Runtime Fabric, however, businesses gain additional flexibility and finer-grained control to balance performance and scalability.

CloudHub and Anypoint Runtime Fabric are both options for deploying Mule applications. Here are the differences between the two.

CloudHub

CloudHub is an Integration Platform as a Service (iPaaS): a multi-tenant, secure, highly available service managed by MuleSoft and hosted on public cloud infrastructure, where MuleSoft manages both the control plane and the runtime plane.

Features of CloudHub:

  • Provides 99.99% availability, automatic updates, and scalability options.
  • It is available globally in multiple regions.
  • When an application is deployed to CloudHub, the Mule runtime runs on an individual AWS EC2 instance per worker.
  • Worker sizes in CloudHub start from 0.1 vCore.
  • Application logs can be viewed from Runtime Manager.
  • Log forwarding to an external service can be performed.
  • Monitoring can be performed via Anypoint Monitoring.

 

CloudHub provides two types of load balancers:

Shared Load Balancer: provides basic functionality such as TCP load balancing.

Dedicated Load Balancer: performs load balancing among CloudHub workers and lets you define SSL configurations.

Anypoint Runtime Fabric

Anypoint Runtime Fabric is a container service that automates the deployment and orchestration of your Mule applications. Containers are executed as Kubernetes pods.

Runtime Fabric runs on customer-hosted infrastructure, whether on-premises or in the cloud. Even though it is the customer’s own infrastructure, it still offers the same benefits as CloudHub, such as horizontal scaling and zero-downtime deployment.

One condition for using RTF is that the customer is willing to share metadata with MuleSoft, since the control plane is managed by MuleSoft while the runtime plane is managed by the customer.

RTF features include:

  • Deploys the Mule runtime on AWS, Azure, bare metal, and VMs.
  • Creates the container infrastructure needed to deploy applications.
  • One deployment does not affect other applications, even when they are in the same Runtime Fabric, and multiple versions of the Mule runtime can run on the same servers.
  • Worker sizes in RTF can start from as low as 0.02 vCore.
  • RTF provides an internal load balancer for processing inbound traffic.
  • Application logs can be viewed via Ops Center, and logs can be forwarded to an external service.
  • Monitoring of applications, servers, and worker instances is handled from Ops Center.

The Concluding View

If you are looking to design, develop, and build applications and APIs at an accelerated rate, or want to deploy applications on legacy systems or in the cloud with automated security and threat protection at every level, Anypoint Runtime Fabric is the solution.

How to create a Guided Decision Table in Drools

1. Create a new project and enter the project name (e.g., UPLOAD).

2. Add a data object asset (e.g., upload).

Picture1

3. Add a Guided Decision Table asset using the “Add Asset” option on the home page.

4. Enter the GDT name, choose the package, and select an option from the Hit Policy drop-down. You can select any of the listed policies based on your requirement; in this example, we use First Hit.

Picture2

5. Click the OK button; the GDT appears as below.

Picture3

6. Insert columns by clicking the Columns tab; a pop-up window appears as below.

Picture4

7. Configure the options as shown in the screenshot below.

Picture5

7.1 New Column:

Select “Add a Condition”, then click Next.

7.2 Pattern:

Picture6

Click the “Create a new Fact Pattern” button; a new pop-up appears.

Picture7

We must provide the Binding value and click Next.

Picture8

7.3 Calculation type:

Select the Literal value radio button and click Next.

You can select other options based on your requirement.

Picture10

7.4 Field:

Choose the field value from the drop-down; Binding is optional.

Picture11

7.5 Operator:

Choose the “equal to” option from the drop-down, then click Next.

Picture12

7.6 Value options:

Here we can provide multiple options as a comma-separated value list. They then appear in the Default value drop-down, as shown below.

Picture13

We entered gold and diamond, and the same values appear in the drop-down. Then click Next.

7.7 Additional Info:

The Header field is mandatory; fill it in and click the Finish button.

Picture14

8. Below is the table structure.

Picture15

9. Add rows using “Append row” under the Insert button drop-down.

Picture16

10. We can add multiple rows using the same option. If we double-click the gold cell, it shows the list of options we added in the previous steps.

Picture17

11. Do not forget to save.

Picture18

12. Using the above steps, we can add as many columns as required.

13. Here we add one more column, an action.

14. Click the Insert tab and choose “Insert Column”.

Picture19

15. Select “Set the value of a field” and click Next.

Picture20

16. Choose the upload[auto] pattern from the Pattern field drop-down, then click Next.

Picture21

17. Select the discount field from the Field drop-down.

Picture22

18. Provide the optional values; they then appear in the drop-down as below.

Picture23

19. Give the Header description as Action and click Finish.

Picture24

20. Below is the final GDT structure.

Picture25

21. Click the Save button, then click the Validate button; validation should complete successfully.
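Besides the Test Scenario asset described below, the rules can also be exercised from Java. The following is a minimal sketch, assuming the kjar containing the GDT is on the classpath with a default KIE session, and assuming the data object created above is a simple bean named Upload with "type" and "discount" fields (it is redefined locally here only to keep the sketch compilable; in a real project you would use the data object class generated in Business Central).

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Minimal smoke test: insert a fact and fire the guided decision table rules.
public class GdtSmokeTest {

    public static class Upload {                       // hypothetical fact class (see note above)
        private String type;
        private Integer discount;
        public String getType() { return type; }
        public void setType(String type) { this.type = type; }
        public Integer getDiscount() { return discount; }
        public void setDiscount(Integer discount) { this.discount = discount; }
    }

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();   // default KIE session from kmodule.xml

        Upload upload = new Upload();
        upload.setType("gold");                            // one of the values from step 7.6
        session.insert(upload);
        session.fireAllRules();

        System.out.println("Discount applied: " + upload.getDiscount());
        session.dispose();
    }
}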

 

Use Test Scenario Asset for testing instead of POSTMAN:

1. Once deployment is complete, add a Test Scenario asset using the “Add Asset” option.

2. Below is the normal Test Scenario Asset.

Picture26

3. We provide the condition and action details as below.

Picture27

4. Then start testing using the play option; the response looks like below.

Picture28

5. If something is wrong, it will be shown as below.

Picture29


CI/CD Email Alert Configuration on Specific File Change

Background:

A notification has to be sent to the developer who commits a change to a configuration file, so that retrofitting the file to the other repos is not missed and issues are avoided upfront. GitLab already offers notifications through the Git integration service “Emails on push”. However, that service cannot restrict mail notifications to changes in a particular file, which creates a problem: inboxes are flooded with emails for every check-in.

Steps to achieve this

  • To configure an email notification for changes to specific files in a git commit, we can use the GitLab CI “only” keyword with “changes”. Following is an example of the CI script used for this.

send_email:                       # notification job
  stage: notify
  only:                           # GitLab CI "only" keyword
    changes:
      - sb/temp/azu/*.prop
  script:
    # capture the last two commits touching property files for the mail attachment
    - git log -p -2 -- '*.prop' | head -50 > commit_history.txt
    - |
      if [[ -z "$REPO" ]]; then
         curl -s --user "api:$API_KEY" "https://api.mailgun.net/v3/$DOMAIN/messages" \
           -F from='Gitlab <gitlabuser@domain.com>' \
           -F to=$GITLAB_USER_EMAIL -F to=$GITLAB_USER_TLD -F to=$GITLAB_USER_NLD \
           -F cc=$GITLAB_USER_TWO -F cc=$GITLAB_USER_PM \
           -F subject='Gate Property file Modified' \
           -F text='Retrofit property files to other Repos' \
           -F attachment='@commit_history.txt'
      else
         echo "It's not allowed to trigger"
      fi

  • In the above YAML snippet, the “only” keyword holds the condition that controls when the script runs. Whenever one of the property files changes, the change is recognized and this job is triggered.
  • An email is sent through the Mailgun service provider for that particular property file change. Without running our own SMTP service, we can create a Mailgun account; it provides an API that can be called with the required parameters so that it sends the notification for us. It is a handy feature and easy to use. The service is free for a couple of months, with an allowance of 5,000 emails per month.
  • Sample curl command to hit the email API:

curl -s --user "api:$API_KEY" "https://api.mailgun.net/v3/$DOMAIN/messages" -F from='Gitlab <gitlabuser@domain.com>' -F to=$GITLAB_USER_EMAIL -F to=$GITLAB_USER_TLD -F to=$GITLAB_USER_NLD -F cc=$GITLAB_USER_TWO -F cc=$GITLAB_USER_PM -F subject='Property file Modified' -F text='Retrofit property files to other Repos' -F attachment='@commit_history.txt'

  • The private API key and service domain from Mailgun are required in the YAML. The steps below show how to find them in Mailgun.
  • Create an account with Mailgun using the link below:

1. Go to https://signup.mailgun.com/new/signup (do not click “Add payment info now”; we can sign up without credit/debit card details).

2. Provide all the sign-up details, such as email and name.

3. Once logged in, you will see the screen below:

Picture1

4. After login, you will also see the screen below:

Picture2

5. Click on the domain to see the domain name. On the right side is the authorized recipients list, where we have to add the email IDs of the people who should receive the notification; with a free account we can currently send to only 5 recipients.

Picture3

6. Click on Settings and then API Keys to get the private key:

Picture4

7. All the variables are defined globally in the GitLab UI under Settings > CI/CD > Variables.

Picture5

  • We can also use the SMTP parameters instead of the API; a sample is below:

./swaks --auth \
        --server smtp.mailgun.org \
        --au postmaster@sandbox43a6751f9b1c43faaf8fa187eadc3a0f.mailgun.org \
        --ap 583627bfa8d4a292c4aa779f9b61aafb-ba042922-a9cb1219 \
        --to recipient@domain.com \
        --h-Subject: "Alert!!! Property file changed" \
        --body 'check the commits'

Reference: https://documentation.mailgun.com/en/latest/


Custom Library

To perform custom manipulations and logic in a process, and to accomplish unique or advanced requirements that fall outside the native functionality of the Boomi platform, custom scripting is written. Such integrations need custom files, third-party scripting libraries, or connector-specific libraries. 

Custom Library components are collections of Java Archive (JAR) files that you can use to support such requirements in Boomi integration processes. 

Creating and deploying Custom Library components enables you to manage JAR files through the Boomi Integration UI. You can deploy Custom Library components to any Atom, Molecule, Atom Cloud, or environment. 
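As an illustration of what typically goes into such a JAR, below is a small, hypothetical Java helper class that a custom scripting step could call once the JAR is uploaded and deployed as a Custom Library; the package and class names are assumptions, not part of the Boomi platform.

package com.example.boomi.util;   // hypothetical package for the custom JAR

/**
 * Hypothetical helper compiled into a JAR, uploaded under the account libraries
 * (see the steps below), and referenced from a Custom Library component so that
 * process scripts can call it once the Atom has been restarted.
 */
public final class ReferenceNumberUtil {

    private ReferenceNumberUtil() { }

    /** Normalizes a free-form reference number to upper-case alphanumerics. */
    public static String normalize(String raw) {
        if (raw == null || raw.isBlank()) {
            return "";
        }
        return raw.trim().toUpperCase().replaceAll("[^A-Z0-9]", "");
    }
}

Once deployed and the Atom restarted, a custom scripting step can import com.example.boomi.util.ReferenceNumberUtil like any other class on the runtime classpath.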

To configure a custom library in Atom Management, follow the steps below: 

Upload external libraries to the Boomi account:

Settings > Account details > Manage Account libraries > Upload a JAR file. 

Create a custom library component in Boomi: 

For the custom library type in Boomi, there are three component types: General, Scripting, and Connector. 

Click Create New Component, select Custom Library, and in the configuration tab define the component name and folder, then click Create. Once the component is created, select the custom library type we want from the drop-down list; if we select the custom library type Connector, we also have to provide the connector type. We can then add JAR files from the previously uploaded custom JAR files and click Save. 

Picture2

Deployment of the custom library component: 

To deploy the custom library, create a packaged component with all the details. 

Once the packaged components are created successfully, click the Attachments tab to attach the environment and version, then click the Deployments tab to deploy the component. 

Restart Atom: 

Before you can use the library in an integration process, the Atom must be restarted: Manage > Atom Management > select an Atom > click the Atom Information tab > click “Restart Atom”. 

 Removing files from a custom library: 

You can remove custom JAR files from a Custom Library component, but you should not do so if the component is currently deployed. 

Custom library component > Select Jar File > Delete > Save. 

 Migrating existing JAR files to use custom libraries: 

Existing JAR files that were placed in user library folders manually continue to work as they did before the introduction of Custom Library components. However, as a best practice, Boomi recommends that you migrate any manually deployed JAR files to custom libraries that can be managed through the Boomi Integration UI. 

To make existing JAR files known to Boomi Integration, follow the normal custom library deployment flow: 

 Upload JAR file > Create Custom library component > Deploy. 

 When the JAR files are deployed, Boomi Integration checks for existing files of the same name: 

  • If the file name and the contents of the file are the same, the file is not replaced. 
  • If the file name is the same but the contents are different, the new JAR file is deployed with a unique suffix to avoid a naming conflict. The existing file is marked for deletion. 

In either case, the JAR file is now recognized by Boomi Integration and can be managed through the UI.