Monday, November 28, 2022

The curious case of the extra for loop iteration

 

Lately I stumbled upon a lovely and elegant bug. Yes, it's pretty funny to call a bug elegant, but my opinion here is quite firm.

Let's consider the following short piece of code:

package main

import (
    "fmt"
    "reflect"
)

type MyStruct struct{}

func (t *MyStruct) Func0() {
    fmt.Println("func0")
}

func (t *MyStruct) Func1() {
    fmt.Println("func1")
}

func main() {
    var t MyStruct
    searchTables := [3]uint32{0, 1}
    for _, v := range searchTables {
        methodName := "Func" + fmt.Sprint(v)
        reflect.ValueOf(&t).MethodByName(methodName).
            Call([]reflect.Value{})
    }
}

Looking through the code, we see a range loop running over a two-member array and calling a function by name via reflection. Nothing really special here, until you see the output, which is:

func0
func1
func0

Wait, what was that? How did a two-member range loop produce three iterations of output?

The reason lies in the fact that the array was declared as a three-member array. When an array literal supplies fewer initializers than the declared length, Go fills the remaining elements with the zero value of the element type (in this case uint32(0)), so the last element is silently 0 and Func0 gets called a second time.

Admittedly, this is not a very hard bug to diagnose, but it is a very easy bug to fall into. Example scenario: you have a declared array from which you would like to remove one member, and although that member has been removed, you forget to change the size of the array; the last value is then silently filled with the zero value of that type, which can lead to a disaster at runtime.
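
One way to avoid this class of bug is to let the compiler infer the array length with the [...] literal, or to use a slice. A minimal sketch (my own suggestion, not part of the original snippet):

package main

import "fmt"

func main() {
    // [...] infers the array length from the number of initializers, so
    // removing a member can never leave a silently zero-filled element behind.
    searchTables := [...]uint32{0, 1}
    fmt.Println(len(searchTables)) // 2

    // A slice behaves the same way here and is the more common choice.
    searchSlice := []uint32{0, 1}
    fmt.Println(len(searchSlice)) // 2
}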

Hope this helps anyone.

Monday, June 13, 2022

How obvious is date comparison in bash, really?

So, what's the big deal anyway?

When I encountered this situation in a real-life scenario, I could not imagine I would invest so much time in comparing dates. After all, comparing dates in bash has been documented a thousand times over on the web.

The solution is quite obvious if you have two date strings and would like to compare them. If you have two Unix epochs on hand, the solution is evident as well.

But what happens when you have an epoch on one hand and an arbitrary date-time string on the other, and you would like to compare them fully? Well, that's less documented. So, after hours of trying out various solutions and failing, this is what worked for me.

Code sample: 

# Assume we already have an epoch stored in "first_epoch".

# Get the process start time as a date-formatted string.
process_uptime_tmp=$(systemctl status <your_process> | grep ago | awk '{print $6" "$7}')

# Convert the date string into a comparable epoch (GNU date).
process_start_epoch=$(date -d "$process_uptime_tmp" +%s)

# Compare the two epochs.
if [ "$first_epoch" -gt "$process_start_epoch" ]
then
  echo "do something"
else
  echo "do something else"
fi


Hope this helps anyone out there looking for a solution.

Cheers,

Tuesday, December 22, 2020

Getting to know IBM VPC API


Introduction: 


"The world is changed. I feel it in the water. I feel it in the earth. I smell it in the air. Much that once was is lost, for none now live who remember it."

Those with a love for fantasy will surely recognize the quote above, in which Galadriel describes the events that led to the world as it is now. Though we are not living in a world of goblins and orcs, our world has indeed changed in much the same way.


Those old enough can remember that you once bought software and received boxes of physical media, which you later installed on the machines your organization dedicated for that purpose. For most organizations, handling day-to-day IT operations is not an easy task, especially when their business domain lies somewhere else (sometimes quite far away from the IT business). So the current trend in the "new world order" is to have the organization's IT infrastructure managed by a trusted partner using one of the cloud providers (sometimes several of them).


IBM has made major efforts to shift its business to the cloud and is progressing steadily toward becoming a leader in the hybrid cloud space. As part of the IBM VPC development for hybrid cloud, I can see more and more clients enlisting in the IBM Cloud and moving their workloads there.

Typically, when you start working with the IBM hybrid cloud you will want to create some objects. You can of course use the user interface for that, but the user interface, although very comfortable, has the shortcoming of requiring an actual user behind the keyboard to fill out all of the required fields. So I am guessing that most clients would rather use a scripting or development language to manage all the objects within their VPC space.

Luckily, the IBM hybrid cloud provides an excellent API just for that case. Enter the IBM API for VPC: IBM REST API

The purpose of this blog is to explain the basic logic behind the IBM API for VPC and to do some basic hands-on training to get you started with IBM Cloud for VPC.

Please note that the API examples within this blog are "partial" and are also subject to constant change.


A little bit of architecture:


Every IBM Cloud data center installation is called a multi-zone region, or "MZR". This means that each IBM Cloud installation contains multiple "zones". Each zone is equivalent to a "large building" containing the compute, storage, network hardware, and software that comprise the "IBM Cloud". These buildings are kept separate so that if a disaster occurs (flood, tsunami, etc.), only part of the cloud's zones will be affected and the client's business-critical workloads can continue to function uninterrupted.

Any resource within the cloud is either regional or zonal. Regional resources exist in all of the zones (they span all zones); a VPC is one example. Zonal resources exist only within one zone; a subnet is one example.


Getting things ready:

To start scripting for the IBM Virtual Private Cloud, you will need to start by installing the IBM Cloud plugin on your machine:

Detailed instructions on how to install the IBM Cloud CLI: IBM Cloud CLI


Let's set some environment variables:


1. ResourceGroupID: 

Execute the following command : 

ibmcloud resource groups
export ResourceGroupID=<your_resource_group_id>
 
2. IAM token:
 
export token="$(ibmcloud iam oauth-tokens | awk '{ print $4 }')"; echo $token

3. Version  :

export version=2020-06-02 

4. API endpoint (based on your region): 

export api_endpoint="https://us-south.iaas.cloud.ibm.com"

The logic behind IBM API for VPC  

IBM VPC REST API has six types of calls, which should cover all the lifecycle operations needed to manage VPC resources:

  1.  POST
  2.  DELETE 
  3.  GET 
  4.  GET /{Resource}
  5.  PATCH
  6.  PUT

POST Calls 

A POST call typically indicates that we want to create a new VPC resource; examples of such resources are a VPC, a network ACL, a security group, a flow log collector, and so on.
The response of the POST call should provide everything required to work with that resource at a later time.

Let's take a closer look at a sample POST call, which creates a VPC:

curl -X POST -sH "Authorization:${token}" "$api_endpoint/v1/vpcs?version=$version&generation=2" -d '{"name": "myvpc1","resource_group": {"id": "'$ResourceGroupID'"}}' | jq

Sample response : 

{
  "id": "r134-efe0c24b-c89d-424b-94d9-f6e12856794a",
  "crn": "crn:v1:staging:public:is:us-south:a/<accountid>::vpc:r134-efe0c24b-c89d-424b-94d9-f6e12856794a",
  "href": "<baseuri>/v1/vpcs/r134-efe0c24b-c89d-424b-94d9-f6e12856794a",
  "name": "myvpc1",
  "status": "available",


Important sections in the response:

  1. Resource identifier (id/CRN/href): every VPC resource in IBM Cloud has a unique identifier, and this identifier has three different "aspects". Each of the main resources can be located and accessed through the API using the "id", "crn", or "href" properties; not all resources have all three identifiers, but most have at least two.

    The resource identifier itself contains two sections:
    zone/region identifier - indicates in which region or zone the resource exists ("r134" in the case above); for more details, see the "A little bit of architecture" section.
    UUID - a unique identifier that persists through all stages of the resource lifecycle.
  2. Name: the name of the resource, which is mostly a user-given name and must be unique within its scope in the IBM Cloud. In several cases, when a name has not been provided by the user as input, the system automatically generates one for the resource; in most cases it can be changed at any later time using a PATCH call.
  3. Status: indicates the lifecycle status of the object that has just been created. In many cases the status will be "available", but for some of the "heavier" resources, where a longer provisioning period may be required, the status will be something like "create_pending", meaning the provisioning process has started but has not yet completed. The status is, of course, a good starting point for investigating any issues with the object at hand (a minimal polling sketch follows this list).
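
Since "heavier" resources may stay in "create_pending" for a while, scripts typically poll the resource with GET until it reports "available". A minimal sketch of such a poll in Go (my own illustration rather than anything from the API docs; the URL, token, attempt count, and sleep interval are placeholders):

package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "net/http"
    "time"
)

// waitUntilAvailable polls a GET /<resource_id> URL until the resource's
// lifecycle status becomes "available" or the attempts run out.
func waitUntilAvailable(client *http.Client, url, token string) error {
    for attempt := 0; attempt < 30; attempt++ {
        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        req.Header.Set("Authorization", token)
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        var body struct {
            Status string `json:"status"`
        }
        err = json.NewDecoder(resp.Body).Decode(&body)
        resp.Body.Close()
        if err != nil {
            return err
        }
        if body.Status == "available" {
            return nil
        }
        // e.g. still "create_pending"; wait and try again.
        time.Sleep(5 * time.Second)
    }
    return errors.New("resource did not become available in time")
}

func main() {
    // Placeholders: substitute your real endpoint, resource id, version, and token.
    url := "https://us-south.iaas.cloud.ibm.com/v1/vpcs/<vpc_id>?version=2020-06-02&generation=2"
    fmt.Println(waitUntilAvailable(http.DefaultClient, url, "<your_iam_token>"))
}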

DELETE calls 

DELETE calls are the exact opposite of POST calls; they typically mean we want to remove a resource from our current resource portfolio. A DELETE call always deletes a single resource.

DELETE calls typically have various checks in place to make sure the user cannot delete a resource that still has other resources attached to it and required for its correct function. For example, a user cannot delete a VPC that still contains subnets: the DELETE /<subnet_id> call is required to be executed for each subnet prior to the actual DELETE /<vpc_id> call, as sketched below.
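
As a sketch of that ordering in Go (my own illustration; the IDs are placeholders and error handling is trimmed for brevity):

package main

import (
    "fmt"
    "net/http"
)

// deleteResource issues a DELETE against a VPC API URL and returns the HTTP status code.
func deleteResource(client *http.Client, url, token string) (int, error) {
    req, err := http.NewRequest(http.MethodDelete, url, nil)
    if err != nil {
        return 0, err
    }
    req.Header.Set("Authorization", token)
    resp, err := client.Do(req)
    if err != nil {
        return 0, err
    }
    resp.Body.Close()
    return resp.StatusCode, nil
}

func main() {
    base := "https://us-south.iaas.cloud.ibm.com/v1"
    qs := "?version=2020-06-02&generation=2"
    token := "<your_iam_token>"

    // Delete the VPC's subnets first...
    for _, id := range []string{"<subnet_id_1>", "<subnet_id_2>"} {
        status, err := deleteResource(http.DefaultClient, base+"/subnets/"+id+qs, token)
        fmt.Println("subnet delete:", status, err) // expect 204 on success
    }
    // ...and only then delete the VPC itself.
    status, err := deleteResource(http.DefaultClient, base+"/vpcs/<vpc_id>"+qs, token)
    fmt.Println("vpc delete:", status, err) // expect 204 on success
}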

On the other hand, there are resources that are considered "subresources" of a major resource; once a major resource is deleted, all of its subresources are deleted implicitly. For example, deleting a VPC will implicitly delete the flow log collector for that VPC (if one exists).

Successful DELETE calls do not return any content, but they do return a successful HTTP 204 (No Content) status code. If the delete operation was not successful, an error code with a description of the error is returned.

Sample DELETE call - delete a VPC:

curl -sS -X DELETE -H "Authorization: $token" "$api_endpoint/v1/vpcs/<vpc_id>?version=$version&generation=2" | jq


GET calls 

GET calls come in one of two forms. A GET call without a resource ID is considered a LIST call, meaning that all resources are returned as a list back to the client.
The list operation can return either of two forms:
  1. Simple list - all resources are listed one by one; this is typically the case for small lists.
  2. Paginated list - for long resource lists it makes no sense to return everything in a single bulk, so a page-by-page iterator pattern is used to return the data. The page size is set by the client call; the default page size is 50.
Sample List calls :


List VPC call - paginated response expected:

curl -sS -X GET -H "Authorization: $token" "$api_endpoint/v1/vpcs?version=$version" | jq

output :

{
  "limit": 50,
  "first": {
    "href": "<baseuri>/v1/vpcs?limit=50"
  },
  "total_count": 1,
  "vpcs": [
    {
      "id": "r134-b004970f-ffcd-465b-9b68-b73a735c8151",
      "crn": "crn:v1:staging:public:is:us-south:a/<accountid>::vpc:r134-b004970f-ffcd-465b-9b68-b73a735c8151",
      "href": "<baseuri>/v1/vpcs/r134-b004970f-ffcd-465b-9b68-b73a735c8151",

In order to traverse the paginated list, we need to provide the "start" parameter; this parameter indicates the resource ID at which the listing of the next page begins.

example: 

curl -X GET -sH "Authorization:${token}" "$api_endpoint/v1/vpcs?version=$version&start=$VpcId" | jq
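
If you are scripting the traversal, the same iterator pattern fits in a few lines of Go. A hedged sketch (assuming, as is usual for this API, that the response carries a "next" href while more pages remain; only the fields used here are modeled, and the token is a placeholder):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// vpcPage models only the parts of the list response that we need here.
type vpcPage struct {
    Next *struct {
        Href string `json:"href"`
    } `json:"next"`
    Vpcs []struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    } `json:"vpcs"`
}

func main() {
    token := "<your_iam_token>"
    url := "https://us-south.iaas.cloud.ibm.com/v1/vpcs?version=2020-06-02&generation=2&limit=50"

    for url != "" {
        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", token)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        var page vpcPage
        err = json.NewDecoder(resp.Body).Decode(&page)
        resp.Body.Close()
        if err != nil {
            panic(err)
        }
        for _, vpc := range page.Vpcs {
            fmt.Println(vpc.ID, vpc.Name)
        }
        url = "" // stop unless the response advertises another page
        if page.Next != nil {
            url = page.Next.Href
        }
    }
}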



List routing table routes - non-paginated response expected:

curl -sS -X GET -H "Authorization: $token" "$api_endpoint/v1/vpcs/$VpcId/routing_tables/$DefRoutingTableId?version=$version" | jq

output:

{
  "id": "r134-9826a943-405e-49ec-b3bb-ec031338fcb5",
  "href": "<baseuri>/v1/vpcs/r134-b004970f-ffcd-465b-9b68-b73a735c8151/routing_tables/r134-9826a943-405e-49ec-b3bb-ec031338fcb5",
  "name": "sixteen-clamshell-feedable-thinness",
  "resource_type": "routing_table",
  "created_at": "2020-10-19T09:45:38Z",


Please note that the list operation might not return all of the data that the target resource contains; therefore, in some cases, when we need the full information we will need to call the GET /<resource_id> operation.

GET/<resource_id> calls

As opposed to a list operation, which is typically used when we do not have the identifier we would like to investigate, the GET /<resource_id> call is made to return a single entity. The returned entity is fully "loaded" and is guaranteed to contain all the information relevant to working with the object at hand.

The GET /<resource_id> call is very efficient in terms of both CPU and network resources, as it returns only the minimal required data without the overhead of a list operation. So, as a rule of thumb, to make our calls more efficient we should prefer GET calls over LIST calls whenever we already hold the identifier.
It's also very efficient on the server side, as no data is manipulated during the course of the call.

Sample GET call - get a VPC:

curl -X GET -sH "Authorization:${token}" "$api_endpoint/v1/vpcs/$VpcId?version=$version" | jq

response: 

{
  "id": "r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "crn": "crn:v1:staging:public:is:us-south:a/<accountid>::vpc:r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "href": "<baseuri>/v1/vpcs/r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "name": "myvpc",


PATCH Calls 

In some cases, we might want to create a resource and defer some of its configuration parameters to a later time.
This is most convenient when you want to start something quickly and augment your configuration later.

The PATCH operation allows some configuration parameters to be modified at a later time. It's important to note that not all configuration parameters can be modified later; some are immutable after the creation of the target resource, so to change them you will need to delete the object and re-create it with the new parameters.

A PATCH call can provide a service as relatively simple as changing the name of a resource; on the other hand, it can execute something much more complex, like changing the target of the resource or even moving it to another availability zone.

Sample PATCH call - change the VPC name:

curl -k -sS -X PATCH -H "Authorization: ${token}" "$api_endpoint/v1/vpcs/$VpcId?version=$version" -d '{ "name": "newname" }' | jq


response: 

{
  "id": "r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "crn": "crn:v1:staging:public:is:us-south:a/<accountid>::vpc:r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "href": "<baseuri>/v1/vpcs/r134-b004970f-ffcd-465b-9b68-b73a735c8151",
  "name": "newname",
   


PUT Calls 

The PUT call is currently the rarest call within the IBM VPC API; we use a PUT call to replace a resource, or a collection of resources, in a single call.
As opposed to PATCH, where the target resource stays the same (same unique identifier) and only part of it is modified, the PUT call replaces the resource entirely (new identifier or identifiers).

Sample PUT call - replace the routing table attached to a subnet:

curl -X PUT -sH "Authorization:${token}" "$api_endpoint/v1/subnets/$SubnetId/routing_table?version=$version" -d '{"id": "'$RoutingTableId'" }' | jq

response (modified routing table): 

{
  "id": "r134-9826a943-405e-49ec-b3bb-ec031338fcb5",
  "href": "<baseuri>/v1/vpcs/r134-b004970f-ffcd-465b-9b68-b73a735c8151/routing_tables/r134-9826a943-405e-49ec-b3bb-ec031338fcb5",
  "name": "sixteen-clamshell-feedable-thinness",
  "subnets": [
    {
      "id": "7392-04925f2c-9559-4108-866e-f2ad4e8a8d89",
      "crn": "crn:v1:staging:public:is:us-south-3:a/<accountid>::subnet:7392-04925f2c-9559-4108-866e-f2ad4e8a8d89",
      "href": "<baseuri>/v1/subnets/7392-04925f2c-9559-4108-866e-f2ad4e8a8d89",
      "name": "subnet-1",


Conclusion:

Throughout this post I tried to explain the logic behind the IBM API for VPC (nextgen). I believe it is very beneficial for anyone scripting against the IBM API for VPC to get to know a little more than just how to issue the commands themselves.
Although this post was quite long, it does not do a deep dive into any one resource of the API; instead, it tries to establish the basic concepts laid down by the API.
 

Thursday, December 19, 2019

Creating fakes for IBM Cloud Object storage using counterfeiter

Intro : 


I recently encountered a problem where I needed to create fake objects for IBM Cloud Object Storage (COS) in order to fake its behavior during unit tests. Seemingly a trivial problem, but I actually didn't find many resources on the subject.

So here are some useful links for anyone thinking about how to achieve this:

The ingredients: 


Counterfeiter: https://github.com/maxbrunsfeld/counterfeiter
A framework for automatically creating fakes and mocks in Go (it needs to be installed on your machine; follow the instructions at the link above)


IBM Cloud Object Storage go SDK: https://github.com/IBM/ibm-cos-sdk-go
A cloud object storage solution made available by IBM

The implementation process for the S3 interface:

  • Clone the IBM COS repo to your machine
  • Go to the COS S3 interface file under /service/s3/s3iface
  • Annotate the interface.go file as follows:


package s3iface

//go:generate go run github.com/maxbrunsfeld/counterfeiter/v6 -generate

import (
   "github.com/IBM/ibm-cos-sdk-go/aws"
   "github.com/IBM/ibm-cos-sdk-go/aws/request"
   "github.com/IBM/ibm-cos-sdk-go/service/s3"
)

And later on, in the same file:

//counterfeiter:generate . S3API
type S3API interface {
   AbortMultipartUpload(*s3.AbortMultipartUploadInput) (*s3.AbortMultipartUploadOutput, error)
   AbortMultipartUploadWithContext(aws.Context, *s3.AbortMultipartUploadInput, ...request.Option) (*s3.AbortMultipartUploadOutput, error)
   AbortMultipartUploadRequest(*s3.AbortMultipartUploadInput) (*request.Request, *s3.AbortMultipartUploadOutput)
   // ... the rest of the interface methods remain unchanged
}




  • Run counterfeiter from that directory:

$ go generate
Writing `FakeS3API` to `s3ifacefakes/fake_s3api.go`... Done


And you're done. Good luck!
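
Once generated, the fake satisfies the S3API interface and can be dropped straight into a unit test. A minimal sketch (hedged: the ListBucketsReturns/ListBucketsCallCount helpers follow counterfeiter's standard naming for generated fakes, and the import path assumes the fakes were generated inside the cloned repo as shown above):

package mypkg_test

import (
    "testing"

    "github.com/IBM/ibm-cos-sdk-go/service/s3"
    "github.com/IBM/ibm-cos-sdk-go/service/s3/s3iface"
    "github.com/IBM/ibm-cos-sdk-go/service/s3/s3iface/s3ifacefakes"
)

func TestListBucketsWithFake(t *testing.T) {
    fake := &s3ifacefakes.FakeS3API{}
    // Stub the call instead of talking to the real COS service.
    fake.ListBucketsReturns(&s3.ListBucketsOutput{}, nil)

    // The fake can be passed anywhere an s3iface.S3API is expected.
    var client s3iface.S3API = fake

    if _, err := client.ListBuckets(&s3.ListBucketsInput{}); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if fake.ListBucketsCallCount() != 1 {
        t.Fatalf("expected exactly one ListBuckets call")
    }
}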

    Thursday, December 14, 2017

    Connecting IBM Streaming Analytics in the cloud with DB2 on premise



    Cloud is a big thing at the moment, and it seems cloud is here to stay. While organisations are continuously moving to cloud solutions, some questions are being raised, especially about how to connect current datasets and databases located on premise to newly created applications residing in the cloud, in a safe and secure manner.

    Thankfully, this question has an answer. IBM has addressed this concern elegantly with a solution named the "IBM Secure Gateway for Bluemix".



    The Secure Gateway service provides a quick, easy, and secure solution for connecting anything to anything. By deploying the lightweight, natively installed Secure Gateway client, you can establish a secure, persistent connection between your environment and the cloud. Once this is complete, you can safely connect all of your applications and resources regardless of their location. For more information about the Secure Gateway service, take a look here.


    In this article we will describe the process of connecting the IBM Streaming Analytics service in the cloud to a local (on-premise) DB2 Express-C installation (running in Docker).

    Here is the overall solution outline:

    [Image: overall solution outline]

    Prerequisites:


    Setting up the database (on premise):

    After downloading the relevant Docker image (I am using a Mac for the purpose of this article), run the installer ("IBM_Db2_Developer_Community_Edition-1.1.3.dmg" for Mac).

    Make sure to use the defaults when running the Docker image.

    User : db2inst1
    Password: db2inst1

    When the installer finishes, go to your Docker image manager (I am using Kitematic for the purpose of this article) and click the exec button, as illustrated in the image below:

    [Image: Kitematic exec button]

    The Docker image's terminal will open after a few seconds. Enter the following commands one by one.

    $ su - db2inst1
    $ db2
    db2 => connect to sample
    db2 => CREATE TABLE EVENTS(ID INTEGER NOT NULL, VALUE INTEGER NOT NULL)


    Please take a look at the following screenshot to verify your actions and their results:

    [Image: DB2 terminal session and results]

    Creating the IBM Streams application:

    Open the Quick Start virtual machine and then open IBM Streams Studio. Import the SPL application located here into your Streams Studio workspace. Take a look at the image below to validate that all of the resources were imported.

    [Image: imported Streams project resources]

    When the project has been imported, change any properties within the code you might need. Note that host and port are submission-time values, so there is no need to change them. Build the project and create a SAB (Streams Application Bundle) file. Save the SAB file; we will use it later in this article.

    Creating the Secure Gateway

    The Secure Gateway is the link between our application (the SAB we built from code, which will run on the Streaming Analytics service on the IBM Cloud) and the DB2 Express-C database we configured on the local machine.

    Let's go ahead and create the gateway service.

    Go to the IBM Cloud (Bluemix) dashboard and click the "Create resource" button, search for the string "gateway", and choose "Secure Gateway" under the platform section (see below).

    [Image: Secure Gateway in the IBM Cloud catalog]

    Next, choose your region, organisation, and space, then click Create.

    Afterwards, click "Add Gateway", take all the defaults, and click the "Add Gateway" button (see image below).

    [Image: Add Gateway dialog]

    Next, we will add an on-premise destination to push the events into.

    Click the Destinations tab --> click the "+" button --> choose "On-premise" --> Next --> enter the IP of the machine running the Docker image (the machine that binds port 50000; basically your machine) and port 50000 --> Next --> click Next --> click Next --> name the resource "PushToDb2" and then click Create (please follow the images).

    [Image: on-premise destination creation steps]

    And the finished configuration should look like this:

    [Image: finished destination configuration]

    If you see the red hand on your destination, don't worry; it only means there are no connected clients at the moment. In the next step we will connect the on-premise client to the Secure Gateway.

    Adding the secure gateway client

    Click the Clients tab on the right --> click the "+" button --> click the Docker icon --> copy the docker run command to your clipboard.

    [Image: copying the client docker run command]

    Open a terminal session on your machine and run the "docker run" command you copied earlier (take a look at the screenshot below).

    [Image: Secure Gateway tunnel connected]

    When the command finishes, verify in your Docker manager that the Secure Gateway client container has been created and that it reports "The Secure Gateway tunnel is connected" (see the image above).

    Once verified, take a second look at the Secure Gateway console and verify that you can see one destination and one client, and make sure the client is now connected (verify against the image below).

    [Image: one destination and one connected client]

    Go back to the Destinations tab again, click the settings icon, and copy the cloud host:port combination (image below).

    [Image: destination settings with cloud host and port]

    Configuring the ACL on the client side (on-premise)

    In order for the client to be able to receive calls from outside, we need to add some hosts and ports to its ACL (Access Control List). Go to the client's terminal session and run the following commands:

    acl allow <your_machine_host>:<your_db2_port>

    acl allow <your_cloud_resource_host>:<your_cloud_resource_port> 

    Take a look at the screenshot below for verification purposes:

    [Image: ACL commands in the client terminal]

    Deploying the streams application on the streaming analytics service (in the cloud) 

    Go back to your Bluemix dashboard and start your Streaming Analytics instance; once started, click the launch button to log in to the system.

    Click the play button to submit the application (the SAB file) and click Submit. A window will open to collect the submission-time values required to connect to the remote system.

    Enter your "cloud host" and "cloud port" into the submission-time values prompt screen, as in the screenshot below.

    [Image: submission-time values prompt]

    Note: make sure not to enter your database host and port, but the gateway's host and port!

    Once the application has been submitted, verify that the Streams application is up and running and is pushing tuples to the database (image below):

    [Image: Streams application pushing tuples to the database]

    Run the following SELECT against the EVENTS table to verify that new tuples are arriving:

    SELECT COUNT(*) FROM EVENTS WITH UR 



    Congratulations, your cloud Streaming Analytics service and on-premise database are now connected.



    Wednesday, November 29, 2017

    Getting your UAA token to access IBM cloud services


    It seems that lately I am having to deal with a lot of security-related issues at work (Kerberos, IAM, UAA, bearer tokens, etc.), and to be more specific, it is mostly the authentication part.

    If you need to access a service in the IBM Cloud (formerly known as Bluemix) that requires a UAA token, either with a username & password or with API key authentication, both from Java code, you just got lucky!

    This is an explanation of how to create an API key in IBM cloud: creating an API key


    package ext.security;
    
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    
    public class InputHandler  {
    
        public String getToken(Process process){
            String token = "" ;
            try {
               BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
               String line;
               StringBuilder builder = new StringBuilder();
               while ((line = in.readLine()) != null) {
                   builder.append(line);
               }
               process.waitFor();
               in.close();
               token = builder.toString();
           }catch(Exception e){
               e.printStackTrace();
           }
           return token ;
        }
    }
    
    
    
    
    
    
    
    
    package ext.security;
    
    import java.io.IOException;
    import java.util.List;
    import java.util.logging.Logger;
    
    public class UAATokenAccessor {
    
        private static final String TOKEN_PREFIX = "{\"access_token\":\"";
        private Logger logger = Logger.getLogger(this.getClass().getName());
    
        public String getUAATokenUsingUserNamePass(String username, String password, String UAAAuthURL) throws IOException,InterruptedException{
            logger.info("A request to get a UAA token was recieved") ;
    
            ProcessBuilder pb = new ProcessBuilder(
                    "curl",
                    "--request","POST",
                    "--header","Authorization: Basic Y2Y6",
                    "--header","Content-Type: application/x-www-form-urlencoded",
                    "--data","grant_type=password&username="+username+"&password="+password,
                    UAAAuthURL);
            System.out.println("!-------------------------------------------------!");
    
            List<String> command = pb.command();
        // Inherit stderr only; stdout must stay piped so getToken() can read the response.
        pb.redirectError(ProcessBuilder.Redirect.INHERIT);

        for (int i = 0; i < command.size(); i++) {
            logger.info(command.get(i) + " ");
        }
        System.out.println("");
            Process process = pb.start();
            InputHandler handler = new InputHandler();
            String tokenRaw = handler.getToken(process);
            String token = formatToken(tokenRaw);
            logger.info("UAA Token returned : " + token);
            return token;
        }
    
        public String getUAATokenUsingAPiKey(String apiKey,String UAAAuthURL) throws IOException,InterruptedException{
    
            logger.info("A request to get a UAA token was received") ;
    
            ProcessBuilder pb = new ProcessBuilder(
                    "curl",
                    "--insecure",
                    "--header","Content-Type: application/x-www-form-urlencoded;charset=utf-8",
                    "--header","Accept: application/x-www-form-urlencoded;charset=utf-8",
                    "--header","Authorization: Basic Y2Y6",
                    "--data","grant_type=password&username=apikey&password=" + apiKey,
                    UAAAuthURL);
    
            System.out.println("!-------------------------------------------------!");
            Process process = pb.start();
            InputHandler handler = new InputHandler();
            String tokenRaw = handler.getToken(process);
            String token = formatToken(tokenRaw);
            logger.info("UAA Token returned : " + token);
            return token;
        }
    
        private String formatToken(String tokenRaw){
            // Strip the {"access_token":" prefix, then cut just before the
            // closing quote that precedes the first comma.
            String tokenRawNoPrefix = tokenRaw.substring(TOKEN_PREFIX.length());
            int tokenEnd = tokenRawNoPrefix.indexOf(",");
            return tokenRawNoPrefix.substring(0, tokenEnd - 1);
        }
    }
    }
    
    
    package ext.test;
    
    import ext.security.UAATokenAccessor;
    import org.junit.Test;
    
    import java.util.logging.Logger;
    
    public class UAATokenTest {
    
        private Logger logger = Logger.getLogger(this.getClass().getName());
        
        final static String bluemixUAATokenAuthURI = "https://login.ng.bluemix.net/UAALoginServerWAR/oauth/token" ;
        
        final static String prodApiKey= "<your-generated-apikey>" ;
        final static String bluemixUser = "<your_ibmid>";
        final static String bluemixPassword = "<your_ibmpassword>";
    
        @Test
        public void doTest(){
    
            doAPIAuthAuthentication(prodApiKey,bluemixUAATokenAuthURI,"Authentication with API key");
    
            doUserPassAuthentication(bluemixUser, bluemixPassword,bluemixUAATokenAuthURI,"Authentication with username and password");
        }
    
        public String doUserPassAuthentication(String user,String password, String authUri, String description){
            logger.info("testing:" + description);
            String authorization = null ;
            final UAATokenAccessor uaaAccessor = new UAATokenAccessor();
            try {
                authorization = uaaAccessor.getUAATokenUsingUserNamePass(user, password, authUri);
                logger.info("result for " + description + ": " +authorization);
            }catch(Exception e){
                e.printStackTrace();
            }
            return authorization ;
        }
    
        public String doAPIAuthAuthentication(String apiKey, String authURI, String description){
            logger.info("testing:" + description);
    
            String authorization = null ;
            try {
    
                final UAATokenAccessor uaaAccessor = new UAATokenAccessor();
                authorization = uaaAccessor.getUAATokenUsingAPiKey(apiKey, authURI);
                logger.info("result for " + description + ": " +authorization);
            }catch(Exception e){
                e.printStackTrace();
            }
    
            return authorization ;
        }
    }
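
    As an aside, the same token request can be issued directly with an HTTP client instead of shelling out to curl. A minimal sketch in Go (my own illustration; it carries the exact headers and form body used by the Java code above, and "Basic Y2Y6" is the standard public "cf" UAA client with an empty secret):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
        "strings"
    )

    func main() {
        // Same endpoint and placeholder credentials as the Java test above.
        authURL := "https://login.ng.bluemix.net/UAALoginServerWAR/oauth/token"
        form := url.Values{}
        form.Set("grant_type", "password")
        form.Set("username", "apikey")
        form.Set("password", "<your-generated-apikey>")

        req, err := http.NewRequest("POST", authURL, strings.NewReader(form.Encode()))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Basic Y2Y6")
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON containing "access_token"
    }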
    
    
    
    
    Have fun!