Formats not available for custom connector in Ververica - apache-flink

I created a custom connector based on the filesystem connector (I changed the factoryIdentifier method to return a different identifier and also changed the package name to avoid a collision with the original filesystem connector).
Then I deployed my connector to Ververica platform using UI.
The custom connector works fine for format=raw; however, changing the format to something else (csv, json) yields the error message:
csv format not found - it seems only the raw format is available for my connector.
Is it possible to enable other formats for my connector, or is there any workaround for it?

Related

Loading TensorFlow.js model from File Server

I am trying to load a TensorFlow.js model via the HTTP protocol. TensorFlow.js requires 'model.json' and 'weights.bin' to be stored in the same folder, but I can only pass 'model.json' as a parameter; it refers to the binary file by itself. That is how it works as far as I know.
For now, in the local environment, I am loading the model from localhost (http://127.0.0.1:8080) and it works fine.
However, the actual application accepts the HTTPS protocol only. So I have tried storing the model and weights in the same S3 bucket and calling it via Lambda, but it seems only 'model.json' is retrieved. I am thinking of using an EC2 instance running a Python Flask server, but the result seems to be the same: only model.json is retrieved, not the binary file.
Is there any way to retrieve 'model.json' so that it also refers to the weight file? Is there any way to host a file server remotely over HTTPS?
TFJS downloads the model JSON, parses it, and uses whatever paths are specified in that JSON - you can edit the file and set any URL you want for the weights.
Alternatively, you can also use lower-level methods to load the weights manually (in case you want a custom loader, etc.), but leave that for the future until you're more comfortable with TFJS.
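A minimal sketch of both approaches, assuming a recent @tensorflow/tfjs release (the example.com / cdn.example.com URLs are placeholders):

```js
import * as tf from '@tensorflow/tfjs';

async function loadRemoteModel() {
  // Simplest case: model.json and weights.bin are served from the same HTTPS
  // folder. loadLayersModel fetches model.json, reads its "weightsManifest",
  // and resolves the listed weight paths relative to the model.json URL.
  return tf.loadLayersModel('https://example.com/models/model.json');
}

async function loadModelWithSeparateWeights() {
  // If the weight shards live elsewhere (another bucket, a CDN, ...), either
  // edit the "paths" entries inside model.json as described above, or leave
  // model.json untouched and redirect the weight requests with weightPathPrefix.
  return tf.loadLayersModel('https://example.com/models/model.json', {
    weightPathPrefix: 'https://cdn.example.com/model-weights/',
  });
}
```

In other words, the S3/Lambda and Flask setups only need to serve weights.bin at the URL the manifest resolves to; nothing in TFJS itself requires plain HTTP.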

Logic Apps with Form Recogniser (Azure) - Error message: "ActionFailed. An action failed. No dependent actions succeeded."

I have created a logic app to read invoice data using the Form Recogniser. When I tested it, I got an error. Any idea how this could be fixed?
Please see my workflow and other details below.
Many thanks.
According to the official Microsoft documentation, the Analyze form connector only supports PDF and image (JPEG or PNG) formats. However, the attachment coming from your email is an octet-stream file, which is certainly not supported.
You need to ensure that you are supplying only the supported file formats.

Where to find the OSB Business service configuration details in the underlying database?

In the OSB layer, when the endpoint URI is changed, I need to alert the core group that the endpoint has changed so they can review it. I tried SLA alert rules, but they do not have an option for this. My question is: the endpoint URI should be saved somewhere in the underlying database. If so, what are the schema and table names to query it?
The URI, or in fact any other part of an OSB artifact, is not stored in a relational database but rather kept in memory in its original XML structure. It can only be accessed through the dedicated session management API. The interfaces you will need to use are part of the com.bea.wli.sb.management.configuration and com.bea.wli.sb.management.query packages. Unfortunately it is not as straightforward as it sounds; in short, to extract the URI information you will need to:
Create a session instance (SessionManagementMBean)
Obtain an ALSBConfigurationMBean instance that operates on the SessionManagementMBean
Create a Query object instance (BusinessServiceQuery) and run it on the ALSBConfigurationMBean to get a Ref object to the OSB artifact of your interest
Invoke getServiceDefinition on your Ref object to get the XML service definition
Extract the URI from the XML service definition with XPath
The downside of this approach is that you are basically polling the configuration each time you want to check whether anything has changed.
More information, including Java/WLST examples, can be found in the Oracle Fusion Middleware Java API Reference for Oracle Service Bus.
There is also a good blog post describing OSB customization with WLST: ALSB/OSB customization using WLST.
The information about services and all their properties can be obtained via the Java API. The API documentation contains sample code, so you can get it up and running quite quickly; see the "Querying resources" paragraph when following the given link.
We use the API to read the service (both proxy and business) configuration and for simple management.
As long as you only read the properties, you do not need to handle management sessions. Once you change values, you need to start a session and activate it once you are done -- a very similar approach to the Service Bus console.

Relay JS only on client side

I have developed a GraphQL server with Laravel (https://github.com/Folkloreatelier/laravel-graphql). Now I would like to create a React application using Relay. When I create my first component, I receive the error:
Uncaught Error: Invariant Violation: RelayQL: Unexpected invocation at runtime. Either the Babel transform was not set up, or it failed to identify this call site. Make sure it is being used verbatim as Relay.QL.
I googled the error and learned that I need to install the Babel Relay Plugin. But to include this plugin I have to specify a schema. I have already specified this schema on the server side, so why do I need it on the client side as well? Is there any example of how to implement this when the server already exists (an external server, i.e. not a Node.js server)?
Thank you for your advice.
The schema for your data on the server side is required by babel-relay-plugin to transpile the Relay.QL fragments in the JavaScript that will be delivered to the client side. However, note that on your GraphQL server you define the schema in Laravel GraphQL PHP classes, which you need to convert into the JSON format that babel-relay-plugin expects.
For example, I did a similar setup with Rails and graphql-ruby (https://github.com/nethsix/relay-on-rails).
Define the data schema on the server side using graphql-ruby classes in the app/graph directory
Convert the schema into a JSON file using the script lib/tasks/graphql.rake, then stash the result in app/assets/javascripts/relay/data/schema.json
Point to the schema.json in your babelRelayPlugin.js file wherever it is (mine is in assets/javascripts/relay/utils/babelRelayPlugin.js)
For Rails, we can easily dump the graphql-ruby schema to JSON by just calling the #to_json method. You may have similar methods in PHP.
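If nothing equivalent exists on the PHP side, a server-agnostic alternative is to run the standard GraphQL introspection query against the running Laravel endpoint and save the response. A rough sketch, where the endpoint URL and output path are placeholders and which assumes the classic graphql-js package that exported introspectionQuery (newer versions expose getIntrospectionQuery() instead):

```js
// dump-schema.js: write the schema.json that babel-relay-plugin expects.
const fs = require('fs');
const fetch = require('node-fetch'); // or any other HTTP client
const { introspectionQuery } = require('graphql/utilities');

fetch('http://localhost:8000/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: introspectionQuery }),
})
  .then((res) => res.json())
  .then((result) => {
    // result looks like { data: { __schema: { ... } } }
    fs.writeFileSync('schema.json', JSON.stringify(result, null, 2));
    console.log('Wrote schema.json');
  });
```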
I compared setting up Relay on a Node.js server vs. a non-Node.js server using an illustration here, if it helps (https://medium.com/#khor/relay-facebook-on-rails-8b4af2057152).
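For completeness, the babelRelayPlugin.js file in the classic Relay setup is usually just a thin wrapper that hands the dumped schema to babel-relay-plugin; a sketch along these lines, where the relative path to schema.json is whatever you chose above:

```js
// babelRelayPlugin.js (classic Relay / babel-relay-plugin setup)
const getBabelRelayPlugin = require('babel-relay-plugin');

// schema.json is the introspection result; babel-relay-plugin wants the
// object under its top-level "data" key.
const schema = require('./data/schema.json');

module.exports = getBabelRelayPlugin(schema.data);
```

With that file referenced from your Babel configuration, Relay.QL fragments are compiled at build time and the "Unexpected invocation at runtime" error goes away.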

EFCodeFirst 4.2 and Provider Manifest tokens

I have a library that I created that depends on EF Code First for DB interaction. I am also using EntityMigrations Alpha 3. When I use the library in my main application (WPF), everything works as expected. Another part of the system uses Excel and retrieves information using the same library via an additional COM class in between.
In the Excel scenario, as soon as it tries to connect to the database, it throws an exception to do with "The Provider did not return a ProviderManifestToken".
I'm really not sure why I'm only getting the error when I go through Excel/COM. In both scenarios I can confirm that the same DB connection string is being used. The method to retrieve the DB connection string is also the same - they use a shared config file and loader class.
Any suggestions welcome.
Issue resolved.
I had also created a custom DbInitializer, and part of the initialization calls upon EntityMigrations to ensure the DB is up to date. The custom migration calls the default constructor on your context. By convention this will either dynamically use its own connection string for SQL Express (which I don't have installed) or look for an entry in your config file (I don't have this either for the DLL - the config comes from the hosting apps).
This is what was causing the failure when the library is used from Excel (in my scenario). The migration news up an instance of the context using the default constructor, which means a config entry for the connection string is required, or it falls back to the default behaviour (SQL Express). When used from Excel in a COM environment, no config file exists.
Moving the migration out of the initialization strategy means I no longer have the problem.
