I have seen references to 'zone' in the MsgPack C headers, but I can find no documentation on what it is or what it's for. What is it? Furthermore, where is the function-by-function documentation for the C API?
msgpack_zone is an internal structure used for memory management & lifecycle at unpacking time. I would say you will never have to interact with it if you use the standard, high-level interface for unpacking or the alternative streaming version.
To my knowledge, there is no detailed documentation: instead you should refer to the test suite that provides convenient code samples to achieve the common tasks, e.g. see pack_unpack_c.cc and streaming_c.cc.
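For reference, the basic pack/unpack round trip with the C API looks roughly like this. It is only a minimal sketch along the lines of pack_unpack_c.cc, assuming a reasonably recent msgpack-c; note that the zone backing the unpacked object lives inside msgpack_unpacked and is released by msgpack_unpacked_destroy:

#include <stdio.h>
#include <msgpack.h>

int main(void) {
    /* pack an integer into a growable buffer */
    msgpack_sbuffer sbuf;
    msgpack_sbuffer_init(&sbuf);
    msgpack_packer pk;
    msgpack_packer_init(&pk, &sbuf, msgpack_sbuffer_write);
    msgpack_pack_int(&pk, 42);

    /* unpack it again; the zone behind result.data is owned by 'result' */
    msgpack_unpacked result;
    msgpack_unpacked_init(&result);
    if (msgpack_unpack_next(&result, sbuf.data, sbuf.size, NULL) == MSGPACK_UNPACK_SUCCESS) {
        msgpack_object_print(stdout, result.data);
        printf("\n");
    }

    msgpack_unpacked_destroy(&result);   /* frees the zone */
    msgpack_sbuffer_destroy(&sbuf);
    return 0;
}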
From what I could gather, it is a move-only type that stores the actual data of a msgpack::object. It may well be intended as an implementation detail, but it actually leaks into users' code sometimes. For example, any time you want to capture a msgpack::object in a lambda, you have to capture the msgpack::zone object as well. Sometimes you can't use move capture (e.g. asio handlers in some cases will only take copyable handlers, or your compiler doesn't support the feature). To work around this, you can:
msgpack::unpacked r;
while (pac_.next(&r)) {
    // copy the object, but hand ownership of the zone that backs it to a shared_ptr
    auto msg = r.get();
    io_->post([this, msg, z = std::shared_ptr<msgpack::zone>(r.zone().release())]() {
        // msg is valid here because z keeps the zone alive
    });
}
I'm creating a simple demo system to try out FMI, where I have one simulator connected to an FMU which computes the state of the system (represented as a number calculated from a closed-form equation) and another FMU that controls the system via a parameter of that closed-form equation. So the system looks something like
FMU-system <--> Simulator <--> FMU-control
In every iteration, I'm updating the system state based on 1 equation, and passing it to the control, which returns a parameter to be passed to the system.
I'm using FMI 2.0.3 and have read the specification. Right now I have three files: one to act as the simulator and two to act as the FMUs. But I'm having difficulties with the implementation of the FMUs and the initialisation of the simulator.
To initialise the FMU, my understanding is I need to call fmi2Instantiate which has this signature.
fmi2Component fmi2Instantiate(fmi2String instanceName, fmi2Type fmuType, fmi2String fmuGUID, fmi2String fmuResourceLocation, const fmi2CallbackFunctions* functions, fmi2Boolean visible, fmi2Boolean loggingOn);
But I don't know what to pass to the function for the GUID, resource location and callback functions. How should I implement the callback functions and the initialisation?
Then to implement the FMU, my understanding is I need to implement fmi2SetReal, fmi2GetReal and fmi2DoStep, but I can't figure out how to implement them in terms of code. These are the signatures
fmi2Status fmi2SetReal(fmi2Component c, const fmi2ValueReference vr[], size_t nvr, const fmi2Real value[]);
fmi2Status fmi2GetReal(fmi2Component c, const fmi2ValueReference vr[], size_t nvr, fmi2Real value[]);
fmi2Status fmi2DoStep(fmi2Component c, fmi2Real currentCommunicationPoint, fmi2Real communicationStepSize, fmi2Boolean noSetFMUStatePriorToCurrentPoint);
But I can't figure out how to implement these functions. Is fmi2Component c meaningless here? And I suppose I have to do the system state computation for the FMU-system in doStep. How should I update the state, and where should that code go?
Sorry if this is too many questions but I was trying to look for a tutorial too and I couldn't find any.
https://github.com/traversaro/awesome-fmi
This is a curated list of Functional Mock-up Interface (FMI) libraries, tools and resources.
There are non-commercial tools available. Check them out; you will get an idea of how to implement these functions for your application.
A good starting point for implementing FMI support is the open-source Reference FMUs (which recently also got a simple FMU simulator) together with FMPy:
https://github.com/CATIA-Systems/FMPy
https://github.com/modelica/Reference-FMUs/tree/main/fmusim
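To make this more concrete, here is a rough co-simulation FMU skeleton. It is only a sketch under a few assumptions: value references 0 and 1 are mapped to the system state and the control parameter (the real mapping, and the guid you must pass to fmi2Instantiate, come from modelDescription.xml), fmuResourceLocation is a URI to the FMU's unpacked resources directory, the callback struct is supplied by the importer (typically a printf-style logger plus calloc/free-like allocators), and fmi2DoStep uses a placeholder equation:

#include <stdlib.h>
#include "fmi2Functions.h"   /* header from the FMI 2.0 standard */

/* fmi2Component is an opaque handle: in practice it points to a struct you define yourself. */
typedef struct {
    fmi2Real state;       /* assumed value reference 0 */
    fmi2Real parameter;   /* assumed value reference 1 */
    const fmi2CallbackFunctions *callbacks;
} Model;

fmi2Component fmi2Instantiate(fmi2String instanceName, fmi2Type fmuType,
                              fmi2String fmuGUID, fmi2String fmuResourceLocation,
                              const fmi2CallbackFunctions *functions,
                              fmi2Boolean visible, fmi2Boolean loggingOn) {
    /* fmuGUID must match the guid attribute in modelDescription.xml */
    Model *m = (Model *)functions->allocateMemory(1, sizeof(Model));
    if (!m) return NULL;
    m->state = 0.0;
    m->parameter = 0.0;
    m->callbacks = functions;
    return m;
}

fmi2Status fmi2SetReal(fmi2Component c, const fmi2ValueReference vr[],
                       size_t nvr, const fmi2Real value[]) {
    Model *m = (Model *)c;
    for (size_t i = 0; i < nvr; i++) {
        if (vr[i] == 0) m->state = value[i];
        else if (vr[i] == 1) m->parameter = value[i];
        else return fmi2Error;
    }
    return fmi2OK;
}

fmi2Status fmi2GetReal(fmi2Component c, const fmi2ValueReference vr[],
                       size_t nvr, fmi2Real value[]) {
    const Model *m = (const Model *)c;
    for (size_t i = 0; i < nvr; i++) {
        if (vr[i] == 0) value[i] = m->state;
        else if (vr[i] == 1) value[i] = m->parameter;
        else return fmi2Error;
    }
    return fmi2OK;
}

fmi2Status fmi2DoStep(fmi2Component c, fmi2Real currentCommunicationPoint,
                      fmi2Real communicationStepSize,
                      fmi2Boolean noSetFMUStatePriorToCurrentPoint) {
    Model *m = (Model *)c;
    /* advance the state to the end of the communication step;
       replace this line with your actual closed-form equation */
    m->state = m->parameter * (currentCommunicationPoint + communicationStepSize);
    return fmi2OK;
}

A real FMU also has to export the remaining mandatory functions (fmi2FreeInstance, fmi2SetupExperiment, fmi2EnterInitializationMode, fmi2ExitInitializationMode, fmi2Terminate, ...); the Reference FMUs show what a complete implementation looks like.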
I am tasked with assisting in the design of a dynamic library (exposed through a C interface) intended to be used in embedded software applications on various embedded platforms (Android, Windows, Linux).
The main requirements are speed and decoupling.
For the decoupling part: one of our requirements is to facilitate integration and so permit backward compatibility and resilience.
My library has some entry points that should be called by the integrating software (like an initialize constructor to provide options such as where to log, how to behave, etc.) and could also call some callbacks in the application (an event to inform when a task is finished).
So I have come up with several propositions, but as none of them seems great, I am looking for advice on better or more standard ways to achieve decoupling and backward compatibility than the approaches I have come up with:
The first option I could think of is to have a generic interface for my exposed entry points, for example with a hashmap of key/values for the parameters of my functions, so in pseudo-code it gives something like:
myLib.Initialize(Key_Value_Option_Array_Here);
Another option is to provide generic functions for passing all the options and callbacks to the library:
myLib.SetOption(Key_Of_Option, Value_OfOption);
myLib.SetCallBack(Key_Of_Callback, FunctionPointer);
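In plain C, that second option might look roughly like the following (the names are purely illustrative, not an existing API):

/* Illustrative shape of the SetOption / SetCallback style. */
typedef enum { MYLIB_OPT_LOG_PATH, MYLIB_OPT_VERBOSITY } MyLib_OptionKey;
typedef enum { MYLIB_CB_TASK_FINISHED } MyLib_CallbackKey;

typedef void (*MyLib_Callback)(void *userData);

int MyLib_SetOption(MyLib_OptionKey key, const char *value);
int MyLib_SetCallback(MyLib_CallbackKey key, MyLib_Callback callback, void *userData);
int MyLib_Initialize(void);   /* called once all options and callbacks are registered */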
When presenting my options, my colleague asked me why not use a Google protobuf message as the interface between the library and the embedding software; but it seems weird to me, as there will be a performance hit on each call for serialization and deserialization.
Is there any more efficient or more standard way that you could think of?
You could have a struct for optional arguments:
typedef struct {
    uint8_t optArg1;
    float optArg2;
} MyLib_InitOptArgs_T;
void MyLib_Init(int16_t arg1, uint32_t arg2, MyLib_InitOptArgs_T const * optionalArgs);
Then you could use compound literals on function call:
MyLib_Init(1, 2, &(MyLib_InitOptArgs_T){ .optArg2=1.2f });
All non-specified members are zero-initialized (0, 0.0f, NULL) and would be considered unused. Similarly, when passing NULL for the struct pointer, all optional arguments would be considered unused.
The downside of this method is that if you expect to add many new arguments in the future, the structure could grow too big. But whether that is an issue depends on what your limits are.
Another option is to simply have multiple smaller initialization functions for initializing different subsystems. This could be combined with the optional-arguments system above.
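For instance, a hypothetical split (names made up for illustration) could look like:

/* Per-subsystem initializers, each reusing the optional-argument struct idea from above. */
typedef struct { const char *logPath; uint8_t verbosity; } MyLib_LogOptArgs_T;
typedef struct { uint32_t threadCount; } MyLib_CoreOptArgs_T;

void MyLib_InitLogging(MyLib_LogOptArgs_T const *optionalArgs);
void MyLib_InitCore(MyLib_CoreOptArgs_T const *optionalArgs);

/* usage */
MyLib_InitLogging(&(MyLib_LogOptArgs_T){ .logPath = "/tmp/mylib.log" });
MyLib_InitCore(NULL);   /* all core options left at defaults */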
I'm wrapping a C struct in a Ruby C extension, but I can't find the difference between Data_Wrap_Struct and TypedData_Wrap_Struct in the docs. What's the difference between the two functions?
It's described pretty well in the official documentation.
The tl;dr is that Data_Wrap_Struct is deprecated and just lets you set the class and the mark/free functions for the wrapped data. TypedData_Wrap_Struct instead lets you set the class and then takes a pointer to a rb_data_type_struct structure that allows for more advanced options to be set for the wrapping:
the mark/free functions as before, but also
an internal label to identify the wrapped type
a function for calculating memory consumption
arbitrary data (basically letting you wrap data at a class level)
additional flags for garbage collection optimization
Check my unofficial documentation for a couple examples of how this is used.
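As a rough illustration of the typed variant (the names here are made up; the rb_data_type_t layout is the one described in the docs above):

#include <ruby.h>
#include <stdlib.h>

typedef struct {
    int handle;
} MyThing;

static void my_thing_free(void *ptr) {
    free(ptr);
}

static size_t my_thing_memsize(const void *ptr) {
    return sizeof(MyThing);   /* reported via ObjectSpace.memsize_of */
}

static const rb_data_type_t my_thing_type = {
    .wrap_struct_name = "MyThing",            /* the internal label */
    .function = {
        .dmark = NULL,                        /* no Ruby objects to mark */
        .dfree = my_thing_free,
        .dsize = my_thing_memsize,
    },
    .data  = NULL,                            /* arbitrary class-level data */
    .flags = RUBY_TYPED_FREE_IMMEDIATELY,     /* GC optimization flag */
};

static VALUE my_thing_alloc(VALUE klass) {
    MyThing *t = malloc(sizeof(MyThing));
    t->handle = -1;
    return TypedData_Wrap_Struct(klass, &my_thing_type, t);
}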
I want to print out some attributes of video frames. I've looked into the AVFrame struct, but only found the following disappointments:
attribute_deprecated short * dct_coeff
attribute_deprecated uint32_t * mb_type
It seems to me everything I am interested in is already obsolete. Btw, I didn't find
int16_t (*motion_val[2])[2]
attribute in the actual frame I captured. My question is: how can I get access to those attributes such as dct_coeff or motion_val or mb_type of a frame at all?
See av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS) for the motion vectors. The other two have no replacement. The documentation states that they're MPEG-specific and use internal implementation details, which is why no replacement was provided.
(Don't forget to set the AV_CODEC_FLAG2_EXPORT_MVS bit in avctx->flags2, otherwise the motion vectors are not exported.)
For the two with no replacement, I understand you might want that type of information if you're e.g. writing a stream analyzer, but FFmpeg really doesn't provide a stream-analyzer-level API right now. They could, if there were a more generic API, obviously be added as a separate side-data type. If you want that, you should probably become an FFmpeg developer and work on a broader API that is not MPEG-specific (e.g. one that does not use internal macros for mb_type), possibly even implement it for other codecs. In any other case, I don't really see why you would want that information. Can you elaborate?
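To show the motion-vector path concretely, here is a small sketch modeled on FFmpeg's extract_mvs example (the decoder context and the decoded frame are assumed to come from your existing code):

#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
#include <libavutil/motion_vector.h>

/* call on the codec context before avcodec_open2() */
static void enable_mv_export(AVCodecContext *dec_ctx) {
    dec_ctx->flags2 |= AV_CODEC_FLAG2_EXPORT_MVS;
}

/* call on each decoded frame to dump its motion vectors, if any */
static void print_motion_vectors(const AVFrame *frame) {
    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
    if (!sd)
        return;
    const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
    size_t count = sd->size / sizeof(*mvs);
    for (size_t i = 0; i < count; i++)
        printf("source=%d block=%dx%d src=(%d,%d) dst=(%d,%d)\n",
               mvs[i].source, mvs[i].w, mvs[i].h,
               mvs[i].src_x, mvs[i].src_y, mvs[i].dst_x, mvs[i].dst_y);
}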
I am following the example at
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/libs/ml/multiple_linear_regression.html
but in the example the fit function only needs one parameter, whereas in my code fit requires three parameters:
mlr.fit(training, fitParameters, fitOperation);
I thought fitParameters might be an alternative to setIterations() and setStepsize(),
but what is fitOperation?
The fitOperation parameter is actually an implicit parameter which is filled in automatically by the Scala compiler. It encapsulates the MLR logic.
Since your fit function has three parameters, I suspect that you're using FlinkML with Flink's Java API. I would highly recommend using the Scala API, because otherwise you will have to construct the ML pipelines manually. If you still want to do it, then take a look at the FitOperations defined in the MultipleLinearRegression companion object.