Suppose I have an app written in C that must parse some blob data. The parsing is handled by a dedicated library (I have access to this library's code).
Since this blob format is versioned, I need to support, say, two versions of it at a time. The workflow would be something like:
'v1' is out -> library supports 'v1'
'v2' is out -> library now supports 'v2' and 'v1'
'v3' is out -> library now supports 'v3' and 'v2'
and so on.
The main problem is that the majority of symbols in the 'v2' library, for example, are also present in the 'v1' library, with the same function prototypes.
My first thought was to use something like C++ namespaces, in order to have something like:
/app/src/lib_v1/parser.c
void _parse_blob(char* blob) { /* data parsing code for v1 */ }
/////////////////////////////
/app/src/lib_v2/parser.c
void _parse_blob(char* blob) { /* data parsing code for v2 */ }
////////////////////////////
/app/src/main.c
// Pseudo-code
char* data;
if (_check_version(data) == 'v1')
    parser.v1._parse_blob(data);
else
    parser.v2._parse_blob(data);
But I don't know if anything similar can be achieved in C without changing anything in the code of the 'outgoing' library (v1 in this case), since all that code has already been tested and modifying it would invalidate all the release tests.
Another idea would be to split the two versions into two dynamically linked libraries and load/unload them when necessary, but I don't know whether that would work or how efficient it would be.
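A minimal sketch of that second idea, assuming each version is built as its own shared object (the library names and helper function below are mine, not from either library):

/* Sketch: each version lives in its own shared object and is opened on demand,
 * so neither library's source changes. RTLD_LOCAL keeps the identically named
 * symbols of the two versions from clashing. Library names are hypothetical. */
#include <dlfcn.h>
#include <stdio.h>

typedef void (*parse_blob_fn)(char *);

static int parse_with(const char *libname, char *blob)
{
    void *handle = dlopen(libname, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1;
    }

    parse_blob_fn parse = (parse_blob_fn)dlsym(handle, "_parse_blob");
    if (parse)
        parse(blob);

    dlclose(handle);
    return parse ? 0 : -1;
}

/* Usage, mirroring the pseudo-code above:
 *   if (_check_version(data) == 'v1') parse_with("libparser_v1.so", data);
 *   else                              parse_with("libparser_v2.so", data);
 */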
What would be the best approach to this problem?
I am working on a desktop application using Nim's webgui package, which works somewhat like Electron in that it renders a GUI using HTML + CSS + JS. However, instead of bundling its own browser and having a backend in Node, it uses the browser supplied by the OS (Epiphany under Linux/GNOME, Edge under Windows, Safari under iOS) and allows writing the backend in Nim.
In that context I am basically writing an SPA in Angular and need to load the HTML, JS and CSS files into my binary at compile time.
Reading from a known absolute filepath is not an issue; you can use Nim's staticRead for that.
However, I would like to avoid having to adjust the filenames in my application code all the time, e.g. when a new build of the SPA changes a file name from main.a72efbfe86fbcbc6.js to main.b72efbfe86fbcbc6.js.
std/os provides the iterators walkFiles and walkPattern that you can use at runtime, but these fail when used at compile time!
import std/[os, sequtils, strformat, strutils]
const resourceFolder = "/home/philipp/dev/imagestable/html" # Put into config file
const applicationFiles = toSeq(walkFiles(fmt"{resourceFolder}/*"))
/home/philipp/.choosenim/toolchains/nim-#devel/lib/pure/os.nim(2121, 11) Error: cannot 'importc' variable at compile time; glob
How do I get around this?
Thanks to enthus1ast from Nim's Discord server I arrived at an answer: use the collect macro together with the walkDir iterator.
The walkDir iterator does not rely on anything that is only available at runtime and thus can safely be used at compile time. With the collect macro you can iterate over all the files in a specific directory and collect their paths into a compile-time seq!
Basically you write a collect block, which contains a simple for loop whose body evaluates to some value; the collect macro gathers all those values into a seq.
The end result looks pretty much like this:
import std/[sequtils, sugar, strutils, strformat, os]
import webgui

const resourceFolder = "/home/philipp/dev/imagestable/html"

proc getFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  ## Collect the paths of all files in `folder` with the given extension, at compile time.
  result = collect:
    for path in walkDir(folder):
      if path.path.endsWith(fmt".{fileEnding}"): path.path

proc readFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  ## Read the contents of every matching file into the binary via staticRead.
  result = getFilesWithEnding(folder, fileEnding).mapIt(staticRead(it))
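Hypothetical usage (the constant names are mine): because both procs are marked {.compileTime.}, you can call them when initialising a const, so the matching file contents end up baked into the binary:

const jsFiles = readFilesWithEnding(resourceFolder, "js")    # contents of every *.js bundle
const cssFiles = readFilesWithEnding(resourceFolder, "css")  # contents of every *.css file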
Context: I am attempting to automate the inspection of EPS files to detect a list of attributes, such as whether the file contains locked layers, embedded bitmap images, etc.
So far we have found that some of these things can be detected by inspecting the raw EPS file data and its accompanying metadata (similar to the information returned by ImageMagick). However, it seems that in files created by Illustrator 9 and above, the vast majority of this information is encoded within the "AI9_DataStream" portion of the file. This data is ASCII85-encoded and compressed. We have had some success getting at this data by using https://github.com/huandu/node-ascii85 to decode and Node's zlib library to decompress/unzip (our project is written in Node/JavaScript). However, in roughly half of our test cases/files the unzipping portion fails, throwing Z_DATA_ERROR / "incorrect data check".
Our method responsible for trying to decode:
import zlib from 'zlib';        // Node's built-in zlib
import ascii85 from 'ascii85';  // https://github.com/huandu/node-ascii85

export const decode = eps =>
  new Promise((resolve, reject) => {
    const lineDelimiters = /\r\n%|\r%|\n%/g;
    const internal = eps.match(
      /(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
    );
    const hasDataStream = internal && internal.length >= 2;
    if (!hasDataStream) {
      resolve('');
      return; // nothing to decode
    }
    const encoded = internal[2].replace(lineDelimiters, '');
    const decoded = ascii85.decode(encoded);
    try {
      zlib.unzip(decoded, (err, buffer) => {
        // files can crash this process, for now we need to allow it
        if (err) resolve('');
        else resolve(buffer.toString('utf8'));
      });
    } catch (err) {
      reject(err);
    }
  });
I am wondering if anyone out there has had any experience with this issue and has some insight into what might be causing this and whether there is an alternative avenue to explore for reliably decoding this data. Information on this topic seems a bit sparse so really anything that could get us going in the right direction would be very much appreciated.
Note: the buffers produced by the ASCII85 decoding all have the same 78 9c header, which should indicate standard zlib compression (and the data does in fact decompress into parsable output about half the time without error).
Apparently we were misreading something about the ASCII85 encoding. There is a ~> delimiter at the end of the encoded block that needs to be omitted from the string before decoding and subsequent unzipping.
So instead of:
/(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
Use:
/(%AI9_DataStream)([\s\S]*?)(~>)/
And you can get to the correct encoded/compressed data. So far this has produced human-readable/regexable data in all of our current test cases, so unless we are thrown another curve, that seems to be the answer.
The only reliable method for getting content from PostScript is to run it through a PostScript interpreter, because PostScript is a programming language.
If you stick to a specific workflow with well understood input, then you may have some success in simple parsing, but that's about the only likely scenario which will work.
Note that EPS files don't have 'layers' and certainly don't have 'locked' layers.
You haven't actually pointed to a working example, but I suspect the content of the AI9_DataStream is not relevant to the EPS. It's probably a means for Illustrator to include its own native file format inside the EPS file without it affecting a PostScript interpreter. This is how it works with AI-produced PDF files.
This means that when you reopen the EPS file with Adobe Illustrator, it ignores the EPS and uses the embedded native file, which magically grants you the ability to edit the file, including features like layers which cannot be represented in the EPS.
How to use SteamAPICall_t with a SteamLeaderboard_t handle with LuaJIT FFI?
I use LÖVE2D framework & Steamworks Lua Integration (SLI)
Links: FindLeaderboard / UploadLeaderboardScore / Typedef
function UploadLeaderboards(score)
    local char = ffi.new('const char*', 'Leaderboard name')
    local leaderboardFound = steamworks.userstats.FindLeaderboard(char) -- Returns SteamAPICall_t
    local leaderboardCurrent = ?? -- Use SteamAPICall_t with typedef SteamLeaderboard_t somehow.
    local c = ffi.new("enum SteamWorks_ELeaderboardUploadScoreMethod", "k_ELeaderboardUploadScoreMethodKeepBest")
    score = ffi.cast('int', math.round(score))
    return steamworks.userstats.UploadLeaderboardScore(leaderboardCurrent, c, score, ffi.cast('int *', 0), 0ULL)
end
leaderboardCurrent = ffi.cast("SteamLeaderboard_t", leaderboardFound) -- No declaration error
SteamAPICall_t is simply a number that corresponds to your request.
This is meant to be used alongside CCallback in the Steam API.
The Lua integration leaves out CCallback and STEAM_CALLBACK.
The SteamLeaderboard_t response is generated by calling FindLeaderboard.
In this case you are making a request to steam and steam needs to respond in an asynchronous way.
So what you have to do is define a listener object (in C++) that will listen for the response (which will be in the form of a SteamLeaderboard_t) and write C-like functions for it so the FFI can understand them.
This means that your program must be able to do this:
Register a listener for the leaderboard.
Submit a request for a leaderboard (FindLeaderboard).
Wait for the message (SteamLeaderboard_t).
Use SteamLeaderboard_t
In short, you will need to write C++ code for the events, add a C-like interface for them, compile it all into a DLL, and then load that DLL from Lua using the FFI. This can be tricky, so exercise caution.
In C (for ffi.cdef and the DLL):
// YOU have to write a DLL that defines these.
// SteamLeaderboard_t is just a 64-bit handle (see the Typedef link above).
typedef unsigned long long SteamLeaderboard_t;

typedef struct LeaderboardEvents {
    void (*onLeaderboardFound)(SteamLeaderboard_t id);
} LeaderboardEvents;

void MySteamLib_attachListener(LeaderboardEvents* events);
Then in Lua:
local ffi = require("ffi")
local lib = ffi.load("MySteamLib") -- load your DLL here (the library name is hypothetical)

local Handler = ffi.new("LeaderboardEvents")
Handler.onLeaderboardFound = function(id)
    -- do your stuff here; `id` is the SteamLeaderboard_t handle
end
lib.MySteamLib_attachListener(Handler)
While writing your DLL, I STRONGLY recommend that you read through the SpaceWar example shipped with the Steamworks SDK so you can see how callbacks are registered.
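For illustration only, a rough sketch of what the DLL side could look like. LeaderboardEvents mirrors the cdef above, MySteamLib_findLeaderboard is an extra export I invented so the DLL owns the SteamAPICall_t, and while CCallResult / LeaderboardFindResult_t are real Steamworks SDK types, treat the details as an unverified outline rather than working code:

// Hypothetical DLL-side glue: registers a CCallResult for FindLeaderboard and
// forwards the resulting handle to the C callback table defined in the cdef.
#include "steam_api.h"

typedef struct LeaderboardEvents {
    void (*onLeaderboardFound)(SteamLeaderboard_t id);
} LeaderboardEvents;

class LeaderboardListener
{
public:
    void SetEvents(LeaderboardEvents *events) { m_events = events; }

    void Find(const char *name)
    {
        SteamAPICall_t call = SteamUserStats()->FindLeaderboard(name);
        m_callResult.Set(call, this, &LeaderboardListener::OnFound);
    }

private:
    void OnFound(LeaderboardFindResult_t *result, bool ioFailure)
    {
        if (!ioFailure && result->m_bLeaderboardFound && m_events && m_events->onLeaderboardFound)
            m_events->onLeaderboardFound(result->m_hSteamLeaderboard);
    }

    CCallResult<LeaderboardListener, LeaderboardFindResult_t> m_callResult;
    LeaderboardEvents *m_events = nullptr;
};

static LeaderboardListener g_listener;

// Add __declspec(dllexport) (or a .def file) so these are visible to ffi.load.
extern "C" void MySteamLib_attachListener(LeaderboardEvents *events) { g_listener.SetEvents(events); }
extern "C" void MySteamLib_findLeaderboard(const char *name) { g_listener.Find(name); }

// SteamAPI_RunCallbacks() still has to be pumped regularly (e.g. once per
// frame) or OnFound will never fire.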
I want to parse the Swagger data from the JSON I get from {service}/swagger/docs/v1 into a dynamically generated .NET class.
The problem I am facing is that different APIs can have different number of parameters and operations. How do I dynamically parse Swagger JSON data for different services?
My end result should be a list of all APIs and their operations, held in a variable that I can easily search.
Did you ever find an answer for this? Today I wanted to do the same thing, so I used the AutoRest open-source project from MSFT, https://github.com/Azure/autorest. While it looks like it's designed for generating client code (code to consume the API documented by your swagger document), at some point on the way to producing this code it had to have done exactly what you asked in your question: parse the Swagger file and understand the operations, inputs and outputs the API supports.
In fact, we can get at this information: AutoRest publicly exposes it.
So use NuGet to install AutoRest. Then add a reference to AutoRest.Core and AutoRest.Model.Swagger. So far I've simply gone for:
using Microsoft.Rest.Generator;
using Microsoft.Rest.Generator.Utilities;
using System.IO;
...
var settings = new Settings();
settings.Modeler = "Swagger";
var mfs = new MemoryFileSystem();
mfs.WriteFile("AutoRest.json", File.ReadAllText("AutoRest.json"));
mfs.WriteFile("Swagger.json", File.ReadAllText("Swagger.json"));
settings.FileSystem = mfs;
var b = System.IO.File.Exists("AutoRest.json");
settings.Input = "Swagger.json";
Modeler modeler = Microsoft.Rest.Generator.Extensibility.ExtensionsLoader.GetModeler(settings);
Microsoft.Rest.Generator.ClientModel.ServiceClient serviceClient;
try
{
    serviceClient = modeler.Build();
}
catch (Exception exception)
{
    throw new Exception(String.Format("Something nasty hit the fan: {0}", exception.Message));
}
The swagger document you want to parse is called Swagger.json and is in your bin directory. The AutoRest.json file you can grab from their GitHub (https://github.com/Azure/autorest/tree/master/AutoRest/AutoRest.Core.Tests/Resource). I'm not 100% sure how it's used, but it seems it's needed to inform the tool about what it supports. Both JSON files need to be in your bin.
The serviceClient object is what you want. It will contain information about the methods, model types and method groups.
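For example, you could enumerate the parsed operations like this (a sketch going from memory of the old Microsoft.Rest.Generator.ClientModel types, so verify the property names against the version you install):

foreach (var method in serviceClient.Methods)
{
    // Each Method describes one operation from the Swagger document.
    Console.WriteLine("{0} {1}", method.HttpMethod, method.Name);
}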
Let me know if this works. You can try it with their resource files. I used their ExtensionsLoaderTests for reference when I was playing around (https://github.com/Azure/autorest/blob/master/AutoRest/AutoRest.Core.Tests/ExtensionsLoaderTests.cs).
(Also, thank you to Denis, an author of AutoRest.)
If this is still a question, you can use the Swagger Parser library:
https://github.com/swagger-api/swagger-parser
as simple as:
// parse a swagger description from the petstore and get the result
SwaggerParseResult result = new OpenAPIParser().readLocation("https://petstore3.swagger.io/api/v3/openapi.json", null, null);
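From the SwaggerParseResult you can then walk the parsed model; a minimal sketch, assuming the standard swagger-parser v3 API:

// list every path defined in the document
OpenAPI openAPI = result.getOpenAPI();
openAPI.getPaths().forEach((path, item) -> System.out.println(path));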
I have a bunch of little JSON object fragments of the form:
{ id: "wow", foo: 45.4, bar: "hello, world!" }
By "a bunch" I mean about 6 GB worth. :-) (And yes, I know, this isn't technically JSON. The full story is that it originally came from YAML data, but I find that most "JSON" parsers can handle this subset of YAML just fine.)
Currently, I use Newtonsoft's JSON parser with the line:
var obj = Newtonsoft.Json.Linq.JObject.Parse(json);
This works well for me, but I am porting my WinForms app to Silverlight 3.0 (and onward to 4.0 once I get the chance).
From Googling around, I see that there is some "DataContractSuperJavaScriptExSerializer2" library from Microsoft that does JSON parsing.
Should I use that library, or is there something better on the horizon? I'm 30 minutes away from writing my own JSON parser so that I can ensure it is efficient, but I thought I would see if there is anything else worth looking at in the Silverlight 3 world.
Add a reference to System.Json and System.Runtime.Serialization.Json:
using System.IO;
using System.Json;

using (var reader = new StringReader(jsonText))
{
    var response = JsonValue.Load(reader) as JsonObject;
    // parse your JSON here
}
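Inside the using block above you can then index into the object; for instance, with the fragment from the question (hypothetical field access, relying on System.Json's explicit conversion operators):

var id = (string)response["id"];    // "wow"
var foo = (double)response["foo"];  // 45.4
var bar = (string)response["bar"];  // "hello, world!"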