IIB/ESQL: How to test if UUID is valid, if not generate one

I have a message flow in IBM Integration Bus.
I have some input that specifies a UUID. I want to test whether that UUID is valid and, if it is not, generate one.
Downstream, if I call something like UUIDASCHAR on an invalid value, I get a BIP2582 exception (Invalid UUID).
DECLARE myUuid BLOB InputRoot.XMLNSC.someUUID;
SET someUuidChar = UUIDASCHAR(myUuid); -- This throws a BIP2582 exception if myUuid is invalid
I'm not sure how to tackle this in ESQL; this is the type of logic I'm looking for:
if (is_valid(uuid)) then
set output_uuid = uuid
else
set output_uuid = generated_uuid
end if
Thanks

You can put the validity check and the UUID generation in two different ESQL files. The first ESQL should call UUIDASCHAR normally, and the Failure terminal of its Compute node should be wired to the Compute node of the second ESQL, which generates a new UUID.
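Alternatively, both steps can live in a single Compute node by trapping the conversion failure with a handler. A minimal sketch, assuming UUIDASCHAR with no argument returns a freshly generated UUID and reusing the input field from the question:

CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    DECLARE outputUuid CHARACTER;
    DECLARE myUuid BLOB InputRoot.XMLNSC.someUUID;
    BEGIN
        -- If the conversion raises BIP2582, fall back to a generated UUID
        DECLARE CONTINUE HANDLER FOR SQLSTATE LIKE '%'
            SET outputUuid = UUIDASCHAR();
        SET outputUuid = UUIDASCHAR(myUuid);
    END;
    SET OutputRoot = InputRoot;
    SET OutputRoot.XMLNSC.someUUID = outputUuid;
    RETURN TRUE;
END;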

Related

Gatling JDBC with dynamic where clause

I'm using Gatling with the JDBC feeder and would like to dynamically add a parameter to the JDBC feeder's where clause, based on the response from a previous request. In my example below I post to create a user, then have the feeder grab the user's generated UUID using the userId returned from the create-user request, and then post some data with that UUID.
val dbConnectionString = "jdbc:mysql://localhost:3306/user"
val sqlQuery = "SELECT user_uuid FROM users where user_id = '${userId}'"
val sqlUserName = "dbUser"
val sqlPassword = "dbPassword"
val sqlQueryFeeder = jdbcFeeder(dbConnectionString, sqlUserName, sqlPassword, sqlQuery)

val uuidPayload = """{"userUUID":"${user_uuid}"}"""

val MyScenario = scenario("MyScenario").exec(
  pause(1, 2)
    .exec(http("SubmitFormThatCreatesUserData")
      .post(USER_CREATE_URL)
      .body(StringBody("""{"username":"test#test.com"}""")).asJson
      .header("Accept", "application/json")
      .check(status.is(200))
      .check(jsonPath("$..data.userId").exists.saveAs("userId")))
    .feed(sqlQueryFeeder)
    .exec(http("SubmitStuffWithUUID")
      .post(myUUIDPostURL)
      .body(uuidPayload).asJson
      .header("Accept", "application/json")
      .check(status.is(200)))
)
I have verified the following:
1) The user data does get inserted into the DB correctly on the form post
2) The userId is returned from that form post
3) The userId is correctly saved as a Gatling session variable
4) The SQL query executes correctly if I hard-code the userId
The problem is that when I use the Gatling ${userId} parameter in the JDBC feeder's where clause, the userId variable doesn't appear to be used; I get an error saying java.lang.IllegalArgumentException: requirement failed: Feeder must not be empty. When I replace ${userId} with a hard-coded userId, everything works as expected. I would just like to know how I can use the userId session parameter in my JDBC feeder's where clause.
The call to jdbcFeeder(dbConnectionString, sqlUserName, sqlPassword, sqlQuery) that creates the JDBC feeder only takes plain strings as parameters, not Gatling expressions (like ${userId}). The query runs once, verbatim, when the feeder is created, which is why no rows come back and you get the "Feeder must not be empty" error.
In your scenario as posted you are not really using feeders as intended - they are generally used to have different virtual users pick up different values from a pool, whereas you have a static user name and only take the first value returned from the db. It's also generally not a good idea to fetch external data in the middle of a scenario, as it can make timings unpredictable.
Could you just look up and hardcode the user_uuid? The best approach would be to get all your user data and look up things like uuids in the before block of the simulation; however, you can't use the Gatling DSL there.
You could also use a Scala variable to store the user_uuid and define a feeder inline (see the sketch below), but this would get messy if you do need to support multiple users.
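For a single user, one workaround is to skip the feeder entirely and do the lookup with plain JDBC inside an exec block, storing the result in the session. A minimal sketch, reusing the connection values from the question (no error handling, and the surrounding scenario is as above):

import java.sql.DriverManager

val lookupUserUuid = exec { session =>
  val conn = DriverManager.getConnection(dbConnectionString, sqlUserName, sqlPassword)
  try {
    val stmt = conn.prepareStatement("SELECT user_uuid FROM users WHERE user_id = ?")
    stmt.setString(1, session("userId").as[String])
    val rs = stmt.executeQuery()
    rs.next()
    session.set("user_uuid", rs.getString("user_uuid"))
  } finally conn.close()
}

lookupUserUuid would replace the .feed(sqlQueryFeeder) step; as noted above, doing I/O in the middle of a scenario can skew the measured timings.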

Fail Flink Job if source/sink/operator has undefined uid or name

In my jobs I'd like every source/sink/operator to have the uid and name properties defined, for easier identification.
operator.process(myFunction).uid(MY_FUNCTION).name(MY_FUNCTION);
Right now I need to manually review every job to detect missing settings. How can I tell Flink to fail the job if any name or uid is not defined?
Once you get a StreamExecutionEnvironment you can get the graph of the operators.
When you don't define a name, Flink autogenerates one for you. In addition, if you do set a name, Flink adds a Source: or Sink: prefix to it, at least in the case of sources and sinks.
When you don't define a uid, the uid value in the graph at this stage is null.
Given your scenario, where the name and uid are always the same, you can check that every operator has been given a name and uid as follows:
getExecutionEnvironment().getStreamGraph().getStreamNodes().stream()
    .filter(streamNode -> streamNode.getTransformationUID() == null ||
            !streamNode.getOperatorName().contains(streamNode.getTransformationUID()))
    .forEach(System.out::println);
This snippet will print all the operators that don't match your rules.
It won't work in 100% of cases - for example, a uid that happens to be a substring of the name would slip through - but it gives you a general way to access the operator information, apply the filters that fit your case, and implement your own strategy.
The snippet can be used as part of your CI or directly in your application.
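To make the job actually fail rather than just print the offenders, the same stream can feed an exception thrown before execute(). A sketch under the same assumptions (note that StreamGraph is an internal API, and in some Flink versions getStreamGraph() consumes the environment's transformations, so check how your version behaves):

import java.util.List;
import java.util.stream.Collectors;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.graph.StreamNode;

public final class UidGuard {
    // Throws if any operator is missing a uid, or its name doesn't contain the uid
    public static void enforce(StreamExecutionEnvironment env) {
        List<String> offenders = env.getStreamGraph().getStreamNodes().stream()
                .filter(n -> n.getTransformationUID() == null
                        || !n.getOperatorName().contains(n.getTransformationUID()))
                .map(StreamNode::getOperatorName)
                .collect(Collectors.toList());
        if (!offenders.isEmpty()) {
            throw new IllegalStateException("Operators missing uid/name: " + offenders);
        }
    }
}

Call UidGuard.enforce(env) after building the topology, right before env.execute(), so a misconfigured job never gets submitted.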

How to get the output of a powershell command into an array

I am trying to get the output of a PowerShell command into an array, but it doesn't seem to work. What I want is to address the output with a row and a column index, e.g.
$a=Get-Service
With output (part)
Status  Name        DisplayName
------  ----        -----------
Stopped AeLookupSvc Application Experience
Stopped ALG         Application Layer Gateway Service
Stopped AppIDSvc    Application Identity
Running Appinfo     Application Information
Stopped AppMgmt     Application Management
I want to address the DisplayName of the second line, e.g.
$a[2][2]
which should then give
Application Layer Gateway Service
But this does not seem to work.
Can anybody help?
This type of question makes me think that you're probably coming from a Unix background, and are accustomed to having to deal with row and column indices, that sort of thing.
Fundamentally, PowerShell is an object-oriented scripting language; you simply don't need to do what you're asking about here.
For instance, if you want to capture the results, then grab a property for one of the objects, here's how that's done.
First, capture the output.
$a=Get-Service
Now, you want a particular property of a particular entity. To get that, index into the object you want.
> $a[2]

Status  Name     DisplayName
------  ----     -----------
Stopped AJRouter AllJoyn Router Service
To select the .DisplayName, all you have to do is append that to the end of your previous command.
> $a[2].DisplayName
AllJoyn Router Service
If you want to select multiple values, you could use this approach instead.

# Select multiple values from one entity
> $a[2] | select DisplayName, Status

DisplayName                       Status
-----------                       ------
Application Layer Gateway Service Stopped

# Select multiple values from each entity in the array
> $a | select DisplayName, Status

DisplayName                       Status
-----------                       ------
Adobe Acrobat Update Service      Running
AllJoyn Router Service            Stopped
Application Layer Gateway Service Stopped
Application Identity              Stopped
This is not possible without a mapping from property names to array indices. Note that what you see in the output is just a partial list of properties (defined in an XML file somewhere), so there isn't even an easy way to convert those to array indices.
However, I also don't quite understand your need here. You can get the second service with $a[1], as expected, and then get its DisplayName property value with $a[1].DisplayName. PowerShell uses objects throughout; there is simply no need to fall back to text parsing or cryptic column indices just to get at your data. There's an easier way.
The output from Get-Service that you see in the console may look like a text table (that is just how the objects are formatted when sent to the console), but $a is actually an array of System.ServiceProcess.ServiceController objects.
Rather than using row and column designations, you need to use the name of the property to retrieve it, so for your example:
$a[2].DisplayName will return Application Layer Gateway Service
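If numeric row/column indexing is genuinely required, one option is to project each object into a nested array first. A short sketch (the column order is an assumption, chosen to match the default table output):

$a = Get-Service
# The leading comma keeps each inner array from being flattened into the pipeline
$rows = $a | ForEach-Object { ,@($_.Status, $_.Name, $_.DisplayName) }
$rows[2][2]   # third row, third column: that row's DisplayName

In general, though, $a[2].DisplayName is the idiomatic form.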

Batch collecting in Solr PostFilter

What I want to do is exactly like this:
Solr: How to perform a batch request to an external system from a PostFilter?
and the approach I took is similar:
- don't call super.collect(docId) in the collect method of the PostFilter, but store all docIds in an internal map
- call the external system in finish(), then call super.collect(docId) for all the docs that pass the external filtering
The problem I have: docId exceeds maxDoc ("docID must be >= 0 and < maxDoc=100000 (got docID=123456)").
I suspect I am storing local docIds; when the reader changes, docBase changes too, so the global docId (which I believe is constructed in super.collect(docId) from the docId parameter and docBase) becomes incorrect. I've tried storing super.delegate.getLeafCollector(context) along with the docId and calling super.delegate.getLeafCollector(context).collect() instead of super.collect(), but that doesn't work either (I got a NullPointerException).
Look at the code for the CollapsingQParserPlugin in the Solr codebase, particularly CollapsingScoreCollector.finish.
The docIds you receive in the collect call are not globally unique. The Collapsing collector makes them unique by adding the docBase from the context to the local docId to create a globalDoc during the collect() phase.
Then in the finish() phase, you must find the context containing the doc in question and set the reader/leafDelegate, depending on which version of Solr you're running. Specifying the right docId with the wrong context will throw exceptions. For the Collapsing collector, you iterate through the contexts until you find the first docBase smaller than the globalDoc.
Finally, if you added docBase in collect(), don't forget to subtract docBase in finish() when you call collect() on the appropriate DelegatingCollector object, as the author may or may not have done the first time.
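A condensed sketch of that pattern inside a DelegatingCollector subclass (passedExternalFilter is a hypothetical batch call to the external system; see CollapsingScoreCollector for the real implementation):

private final List<Integer> globalDocs = new ArrayList<>();
private List<LeafReaderContext> leaves; // searcher.getTopReaderContext().leaves(), captured in the constructor
private int docBase;

@Override
public void doSetNextReader(LeafReaderContext context) throws IOException {
    this.docBase = context.docBase; // remember the current segment's offset
    super.doSetNextReader(context);
}

@Override
public void collect(int localDoc) {
    globalDocs.add(docBase + localDoc); // store globally unique ids, don't delegate yet
}

@Override
public void finish() throws IOException {
    for (int globalDoc : passedExternalFilter(globalDocs)) {
        int ord = ReaderUtil.subIndex(globalDoc, leaves); // segment that owns this doc
        LeafCollector leaf = delegate.getLeafCollector(leaves.get(ord));
        leaf.collect(globalDoc - leaves.get(ord).docBase); // convert back to a segment-local id
    }
    if (delegate instanceof DelegatingCollector) {
        ((DelegatingCollector) delegate).finish();
    }
}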

How to ignore errors in datastore.Query.GetAll()?

I just started developing a GAE app with the Go runtime, so far it's been a pleasure. However, I have encountered the following setback:
I am taking advantage of the flexibility that the datastore provides by having several different structs with different properties being saved with the same entity name ("Item"). The Go language datastore reference states that "the actual types passed do not have to match between Get and Put calls or even across different App Engine requests", since entities are actually just a series of properties, and can therefore be stored in an appropriate container type that can support them.
I need to query all of the entities stored under the entity name "Item" and encode them as JSON all at once. Using that entity property flexibility to my advantage, it is possible to store queried entities into an arbitrary datastore.PropertyList. However, the Get and GetAll functions return ErrFieldMismatch as an error when a property of the queried entities cannot be properly represented (that is to say, incompatible types, or simply a missing value). All of the structs I'm saving are user-generated and most values are optional, so empty values end up being saved to the datastore. There are no problems when saving these structs with empty values (datastore flexibility again), but there are when retrieving them.
The datastore Go documentation also states that it is up to the caller of the Get methods to decide whether the errors returned due to empty values are ignorable, recoverable, or fatal. I would like to know how to properly do this, since just ignoring the errors won't suffice: the destination structs (datastore.PropertyList) of my queries are not filled at all when a query results in this error.
Thank you in advance, and sorry for the lengthy question.
Update: Here is some code
query := datastore.NewQuery("Item") // here I use some Filter calls, as well as a Limit call and an Order call
items := make([]datastore.PropertyList, 0)
_, err := query.GetAll(context, &items) // context has been defined before
if err != nil {
    // handle the error; in my case, print it and set the server status to 500
}
Update 2: Here is some output
If I use make([]datastore.PropertyList, 0), I get this:
datastore: invalid entity type
And if I use make(datastore.PropertyList, 0), I get this:
datastore: cannot load field "Foo" into a "datastore.Property": no such struct field
And in both cases (the first one I assume can be discarded) in items I get this:
[]
According to the following post, the Go datastore module doesn't support PropertyList yet.
Use a pointer to a slice of datastore.Map instead.
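A minimal sketch of that suggestion, assuming the old appengine/datastore package this answer dates from (where datastore.Map was the generic property container):

import (
    "encoding/json"
    "net/http"

    "appengine"
    "appengine/datastore"
)

func listItems(w http.ResponseWriter, r *http.Request) {
    c := appengine.NewContext(r)
    items := make([]datastore.Map, 0)
    if _, err := datastore.NewQuery("Item").GetAll(c, &items); err != nil {
        // matches the question's handling: report the error with a 500 status
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(items) // encode all Items as JSON at once
}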
