Bulk feature flag upload to LaunchDarkly

I have feature flags defined in an application configuration file (JSON). Is there a way to import all of the feature flags into LaunchDarkly as a bulk operation, rather than creating each flag by hand in the interface?

LaunchDarkly has a REST API for creating feature flags:
https://apidocs.launchdarkly.com/tag/Feature-flags#operation/postFeatureFlag
You could write a small script that reads your JSON file and uploads each flag definition in the format the REST API docs describe.
Disclaimer: I work for Statsig, a feature gating and experimentation platform
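
For illustration, here is a minimal Python sketch of such a script, using the requests library; the JSON layout, project key, and token handling are assumptions you would adapt to your own setup:

    import json
    import os

    import requests

    API_TOKEN = os.environ["LD_API_TOKEN"]  # a LaunchDarkly API access token
    PROJECT_KEY = "default"                 # assumed project key

    # Assumed config layout: {"flags": [{"key": "...", "name": "..."}, ...]}
    with open("feature-flags.json") as f:
        flags = json.load(f)["flags"]

    for flag in flags:
        resp = requests.post(
            f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}",
            headers={"Authorization": API_TOKEN},
            json={"key": flag["key"], "name": flag["name"]},
        )
        resp.raise_for_status()
        print("created", flag["key"])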

Related

How to deactivate a flow using the Workbench API

How can I deactivate or delete a flow through the Workbench API or Apex? I have tried every approach I could find. Could anyone help me out here?
You can deactivate the flow through Workbench by editing its metadata. This applies, for example, when a process is installed using a managed package and references a custom object that isn't in the target org: the process is active but can't be edited using the Process Builder.
Resolution
Follow these steps to deactivate a process.
Log in to Workbench.
Use Workbench to retrieve the metadata components, then download those components as a .zip file.
See the Metadata API documentation for more information.
Modify the flow definition file (contained in the .zip file you downloaded) by setting <activeVersionNumber> to 0, as sketched below.
Note that the flow definition file is available only in API version 34 and later.
See the Salesforce Knowledge Article for the full procedure: https://help.salesforce.com/s/articleView?id=000338777&type=1
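
To illustrate the modification step, here is a minimal Python sketch; the archive name and flow name are placeholders, and after editing you would re-zip the package and deploy it back through Workbench:

    import re
    import zipfile

    # Extract the metadata .zip retrieved from Workbench (the name is a placeholder).
    with zipfile.ZipFile("metadata.zip") as z:
        z.extractall("metadata")

    # Flow definition files live under flowDefinitions/ in the package.
    path = "metadata/flowDefinitions/My_Flow.flowDefinition"
    with open(path) as f:
        xml = f.read()

    # Setting activeVersionNumber to 0 deactivates the flow.
    xml = re.sub(r"<activeVersionNumber>\d+</activeVersionNumber>",
                 "<activeVersionNumber>0</activeVersionNumber>", xml)

    with open(path, "w") as f:
        f.write(xml)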

Dynamic Job Creation and Submission to Flink

Hi, I am planning to use Flink as the backend for a feature where we will show users a UI to graphically create event patterns, e.g. multiple login failures from the same IP address.
We will create the Flink pattern programmatically from the criteria the user provides in the UI.
Is there any documentation on how to dynamically create the JAR file and dynamically submit the job to the Flink cluster?
Are there any best practices for this kind of use case with Apache Flink?
One way you can achieve this is to have a single JAR containing something like an "interpreter": you pass it the definition of your patterns in some format (e.g. JSON), and the interpreter translates that JSON into Flink's operators. This is how it's done in the Flink-based execution engine of https://github.com/TouK/nussknacker/. If you use this approach, you will need to handle redeployment of new definitions in your own application.
Another straightforward way to achieve this would be to generate a SQL script for each pattern (using MATCH_RECOGNIZE) and then use Ververica Platform's REST API to deploy and manage those scripts (a sketch of generating such a script follows this answer): https://docs.ververica.com/user_guide/application_operations/deployments/artifacts.html?highlight=sql#sql-script-artifacts
Flink doesn't provide tooling for automating the creation of JAR files or for submitting them. That's the sort of thing you might use a CI/CD pipeline for (e.g., GitHub Actions).
Disclaimer: I work for Ververica.
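
To make the suggestions above concrete, here is a Python sketch that translates a JSON pattern definition (such as a UI might produce) into a MATCH_RECOGNIZE SQL script; the JSON schema, table, and column names are invented for the example:

    import json

    # A hypothetical pattern definition produced by the UI.
    spec = json.loads("""
    {
      "name": "multiple_login_failures",
      "source": "login_events",
      "partition_by": "ip_address",
      "event_filter": "status = 'FAILED'",
      "repetitions": 3,
      "within": "INTERVAL '1' MINUTE"
    }
    """)

    # Render a Flink SQL statement using MATCH_RECOGNIZE for the pattern.
    sql = f"""
    SELECT *
    FROM {spec['source']}
    MATCH_RECOGNIZE (
      PARTITION BY {spec['partition_by']}
      ORDER BY event_time
      MEASURES FIRST(A.event_time) AS first_failure,
               LAST(A.event_time)  AS last_failure
      PATTERN (A{{{spec['repetitions']}}}) WITHIN {spec['within']}
      DEFINE A AS {spec['event_filter']}
    )"""
    print(sql)

The generated script could then be deployed through the platform's REST API or whatever deployment mechanism you use.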

Programmatically getting Apache Camel component operations, parameters, and option descriptions

Is there a way to get any Apache Camel component's "metadata" using Java code, such as the list of options and other parameters and their types? I think an automatic help builder was mentioned somewhere that might suit this task without using reflection.
I am also looking for a way to get the registered components of all types (including data formats and languages) from Java code. Thanks.
Yeah, take a look at the camel-catalog JAR, which includes all such details. This JAR is what the tooling uses, such as some of the Maven tooling itself and the IDE plugins for IntelliJ or Eclipse. The JAR has both a Java API and metadata files (JSON) embedded in it that you can load.
At runtime you can also access this catalog via RuntimeCamelCatalog, which you can obtain from CamelContext. The runtime catalog is a little more limited than CamelCatalog, as it only sees what is actually available at runtime in the current Camel application.
I also cover this in my book Camel in Action, 2nd edition, which has a full chapter devoted to Camel tooling and how to build custom tooling.
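
Since a JAR is just a zip archive, the embedded metadata can also be read without the Java API. This Python sketch assumes the resource layout used by recent camel-catalog releases; the JAR file name and the exact paths vary by Camel version, so for Java code prefer the CamelCatalog API directly:

    import json
    import zipfile

    # The camel-catalog JAR is a plain zip archive (the version is an example).
    jar = zipfile.ZipFile("camel-catalog-3.20.0.jar")

    # Assumed layout: one JSON metadata file per component under this path.
    with jar.open("org/apache/camel/catalog/components/ftp.json") as f:
        meta = json.load(f)

    # "properties" holds the endpoint options; print name, type, and description.
    for name, opt in meta["properties"].items():
        print(f"{name} ({opt.get('type')}): {opt.get('description')}")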
This is what I've found so far
http://camel.apache.org/componentconfiguration.html

Is PAA a good candidate for automating wcm library deployment and setup in portal?

I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library; then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers, and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't see whether I get any advantages from using a PAA. Is a PAA even capable of updating properties files and restarting services?
I have been working intensively with PAA files, and I must say it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does need some initial setup, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, and you can also use the wsadmin scripting interface (a sketch follows below). I only update resource environment settings and the like in WAS, and I don't touch any properties files, since all the settings are stored in WAS.
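
To illustrate the wsadmin route, here is a minimal Jython sketch that adds a custom property to a Resource Environment Provider; the provider name and the property are placeholders:

    # Run via: wsadmin.sh -lang jython -f set_rep_property.py
    # AdminConfig is provided by the wsadmin environment.

    # Look up the Resource Environment Provider (the name is an example).
    rep = AdminConfig.getid('/ResourceEnvironmentProvider:WP ConfigService/')

    # Custom properties live in the provider's property set.
    propSet = AdminConfig.showAttribute(rep, 'propertySet')

    # Create the new property (name and value are placeholders).
    AdminConfig.create('J2EEResourceProperty', propSet,
                       [['name', 'my.custom.setting'], ['value', 'true']])

    # Persist the change to the configuration repository.
    AdminConfig.save()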
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't understand why you are doing the import manually rather than syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, redeploying the PAA and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra manual step, I don't believe it can restart applications either.

How to dynamically generate a PDF from Google App Engine?

I'd like to create an application that would run on Google App Engine.
However, this application needs to be able to generate PDFs dynamically.
How could I do this?
You can use the ReportLab library to generate a PDF from Python. You can include the ReportLab files with your application's code, or include a zip archive of the ReportLab code and insert it into your application's sys.path.
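
A minimal ReportLab sketch (the text and coordinates are arbitrary); on App Engine you would typically render into an in-memory buffer and write it to the response rather than to disk:

    import io

    from reportlab.pdfgen import canvas

    # Render a one-page PDF into an in-memory buffer.
    buf = io.BytesIO()
    c = canvas.Canvas(buf)
    c.drawString(100, 750, "Hello from ReportLab")  # x, y in points from bottom-left
    c.showPage()
    c.save()

    # Serve these bytes with Content-Type: application/pdf.
    pdf_bytes = buf.getvalue()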
To overcome the number-of-files limit in Google App Engine, you can package ReportLab in a zip file and use it from there. Be sure to check out this issue I bumped into: http://code.google.com/p/googleappengine/issues/detail?id=1085
You can also use Pisa, htmllib and pyPdf to generate the PDF from HTML templates.
All the best.
varun
I would recommend PyFPDF, which is a pure-Python port of the lightweight yet capable PHP FPDF library. It is only a few dozen kilobytes.
See http://code.google.com/p/pyfpdf/
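
For comparison, a minimal PyFPDF sketch (the file name and text are arbitrary):

    from fpdf import FPDF

    # Build a one-page PDF with a single line of text.
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Arial", size=12)
    pdf.cell(0, 10, "Hello from PyFPDF")
    pdf.output("hello.pdf")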
Google has a new "Conversion API" that may solve all your problems. Here's a description from the site:
The App Engine Conversion API converts documents between common filetypes using Google's infrastructure for efficiency and scale. The API enables conversions between HTML, PDF, text, and image formats, synchronously or asynchronously, with an option to perform optical character recognition (OCR).
