Good day!
I have a small question about reloading a C module in Tarantool.
For example, I have a C module which exposes a method:
int calculate(lua_State* L);
In addition, I declared an entry point:
extern "C"
{
LUA_API int luaopen_cuendemodule(lua_State *L);
}
Now I load this module ("testmodule.so") in Tarantool:
require('testmodule')
box.schema.func.create('testmodule.calculate')
box.schema.user.grant('user', 'execute', 'function', 'testmodule.calculate')
And now I call this method from my C# client:
await tarantoolClient.Call<TarantoolTuple<CalculateParameters>, CalculationResults>("testmodule.calculate", TarantoolTuple.Create(....));
It works as expected: the calculate method is executed and the results are returned.
But if I want to update my module, the problems begin: after I replace the .so file and call the calculate method, my Tarantool restarts and I can see something like "tarantool invalid opcode in testmodule.so" in dmesg.
After reading the documentation I saw additional parameters in the function definition, like this:
box.schema.func.create('testmodule.calculate', {language = 'C'})
But after this, if I call it from C# I receive an exception with the message "failed to dynamically load function undefined symbol calculate".
I use Tarantool 1.7 on Ubuntu.
My .so is compiled with GCC 8.1.0.
There is no good (that is, portable) way to do this. I think the best approach is to use something like dlopen()[1], which lets you open and manage shared objects in general, but you have to be very cautious about this: it may break your code or even give you a segfault (there is a small sketch at the end of this answer).
A good example is https://github.com/tarantool/mqtt; that module does not use these functions (func.create and so on), but it could be extended as well.
So the point is: if you develop a C module, you have to think about a reloading policy.
For instance, Unix-like systems have a lot of features that allow reloading shared objects, and Tarantool has some features too.
I also suggest you start thinking about these modules the same way you think about Lua's C modules; they are actually the same thing.
PS
Some reload modules are also available: https://github.com/Mons/tnt-package-reload (I didn't test it) and https://github.com/tarantool/reload (I didn't test it).
[1] http://man7.org/linux/man-pages/man3/dlopen.3.html
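To make [1] more concrete, here is a minimal sketch of the dlopen()/dlsym()/dlclose() cycle I mean; the library name and the calculate symbol come from your question, everything else is purely illustrative:
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Open the shared object; RTLD_NOW resolves all symbols immediately. */
    void *handle = dlopen("./testmodule.so", RTLD_NOW | RTLD_LOCAL);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the exported symbol by name (this is why it must not be C++-mangled). */
    int (*calculate)(void *);
    *(void **)(&calculate) = dlsym(handle, "calculate");
    if (calculate == NULL) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
    }

    /* Releasing the last reference lets a replaced .so be picked up by the next dlopen(). */
    dlclose(handle);
    return 0;
}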
I think you can try this module to reload your application: https://github.com/moonlibs/package-reload.
I'm starting to use TDD for writing embedded C software, and I'm using Google Test as my testing framework. I just ran into a situation that doesn't seem to be covered in any mocking tutorial: I want to count how many times a given REAL function has been called.
So, let's say that I'm developing some code that uses a library called LIB_A, which in turn uses another library called LIB_B.
Normally, I would mock LIB_B and have a test like so:
TEST(MyCodeTest, CanDoSomething) {
  Mock_LIB_B_Class mock_object;
  MyClass my_obj;
  // We expect that doSomething will call SomeMethod at least once
  EXPECT_CALL(mock_object, SomeMethod()).Times(AtLeast(1));
  // Check for the expected return value
  EXPECT_EQ(0, my_obj.doSomething());
}
OK, that's all fine and dandy. Now here's my question: what if I don't mock LIB_B and the real SomeMethod gets called instead? How can I count the number of times it gets called then? Mocking frameworks make counting calls easy, but only for mock functions that don't have a real implementation.
I'm thinking that I could use a fake for LIB_A, so the calls would be countable. I'm considering either Google Mock or the Fake Function Framework.
Thanks!
You don't need any mock framework to accomplish this. You can use gcov/lcov/genhtml.
By including the GCC flags -fprofile-arcs and -ftest-coverage, and the linker flag -lgcov, your executable will generate runtime information about which lines of code were executed. Using the aforementioned tools you can then easily generate a set of HTML files which will show you a list of functions, including their call counts.
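For example (a sketch; the file names are just placeholders for your own test sources), the build could look like this:
gcc -fprofile-arcs -ftest-coverage -c maintest.c -o maintest.o    # compiling also writes the .gcno notes file
gcc maintest.o -lgcov -o maintest
./maintest    # running the test binary writes the .gcda counters next to the object files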
After executing your test, do:
gcov maintest.c
lcov --capture --directory . --output-file maintest.info
genhtml maintest.info --output-directory html
Then open index.html, choose a file, and click the Functions button in the top bar, next to the file name; it will show each function with its call count.
I'm working on a large-scale Angular project with a team of devs.
The problem we run into is when you have several files for one component, say a directive:
some-directive.js
some-directive-controller.js
In both files you have to attach the definition to a module, but only one file must create the module with []. If a developer doesn't poke around first and adds [] in the second file as well, that call will actually overwrite the module. So now it becomes a memory game: each developer has to remember to declare the module with [] in only one file.
some-directive.js
angular.module('some-module',['some-dependencies']).directive('some-directive',function(){});
some-controller.js
angular.module('some-module',[]).controller('some-controller',function(){});
We have been using the following approach. Is there a better way?
some-directive.js
some-directive-module.js
some-directive-controller.js
where some-directive-module.js only contains the module creation, includes any dependencies, and does any .config needed. Still, the dev needs to remember to use
angular.module('some-directive'), without the square brackets, in all the other files.
some-directive-module.js
angular.module('some-directive',[])
.config(//someconfig stuff);
some-directive.js
angular.module('some-directive').directive(//declare directive);
some-directive-controller.js
angular.module('some-directive').controller(//declare controller used by directive);
I suggested that instead we should do the following; it eliminates the issue of overwriting modules, but I received some negative feedback from one of the other devs.
some-directive-module.js
angular.module('some-directive',['some-directive.directive','some-directive.controller'])
.config(//someconfig stuff);
some-directive.js
angular.module('some-directive.directive',[]).directive(//declare directive);
some-directive-controller.js
angular.module('some-directive.controller',[]).controller(//declare controller used by directive);
Is there a better way? Or is one of the above options correct?
The recommended way (by multiple competent people) is to use the setter/getter syntax: create the module once with angular.module("someModule", []) and access it with angular.module("someModule") from there on. Putting the module definition and configuration into one file seems very clean and is common practice among a lot of developers. But make sure not to create a module for every single directive; group services, directives, constants and so on into reasonable functionality modules instead.
Making clear what a file contains by its name is also a good idea in my opinion, so your some-directive-module.js approach seems fine to me. If developers "poke around" and "wildly add []", they should get a slap on the wrist followed by an explanation of how modules work in Angular, so they stop doing it ;-)
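A minimal sketch of that setter/getter split (the file and module names here are made up, not taken from your project):
// some-feature-module.js: the only file that CREATES the module (setter syntax, with []).
angular.module('someFeature', ['someDependency'])
    .config(function () { /* config stuff */ });
// some-feature-directive.js: every other file only RETRIEVES the module (getter syntax, no []).
angular.module('someFeature')
    .directive('someDirective', function () {
        return { restrict: 'E', template: '<span>Hello</span>' };
    });
// some-feature-controller.js
angular.module('someFeature')
    .controller('SomeController', function () { /* controller used by the directive */ });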
One of the projects I'm collaborating on has four different modules (Foo, Bar, Baz, and Plotting) and I've been tasked with combining them into a package. It is simple enough in Julia to make a new package:
julia> Pkg.generate("MyPackage", "MIT")
I copied my modules into ~/.julia/v0.3/MyPackage/src/ and added include statements to MyPackage.jl. It looks something like this:
module MyPackage
include("foo.jl")
include("bar.jl")
include("baz.jl")
include("plotting.jl")
end
Each included file contains the corresponding module.
My main problem with this is Plotting takes forever to import and it's not needed very often when we're using the rest of MyPackage. I'd really like to be able to do something like using MyPackage.Foo to just get Foo (and particularly to exclude the Plotting and its slow import time). I've tried a couple different approaches for how I structure things, including having sub-modules explicitly defined inside MyPackage.jl instead of in each file individually, but no matter what I try, I always get the loading lag from Plotting.
Is it possible to build a package so you can independently load modules from it? And if so, how?
Note: I'm new to Julia and newer still to building packages. Sorry if any of my semantics are wrong or anything is unclear.
Try Requires.jl:
Requires is a Julia package that will magically make loading packages faster, maybe. It supports specifying glue code in packages which will load automatically when another package is loaded, so that explicit dependencies (and long load times) can be avoided.
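A rough sketch of what MyPackage.jl could look like with the current Requires.jl syntax, assuming the slow part of plotting.jl is that it pulls in an external plotting package (Plots is used here purely as an example; the UUID must match the dependency's registered UUID):
module MyPackage

using Requires

include("foo.jl")
include("bar.jl")
include("baz.jl")

function __init__()
    # plotting.jl is only included once the user loads Plots themselves,
    # so a plain `using MyPackage` skips the slow plotting import.
    @require Plots="91a5bcdd-55d7-5caf-9e0b-520d859cae80" include("plotting.jl")
end

end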
Is it possible to build a package so you can independently load modules from it? And if so, how?
Following the advice of this comment has worked for me:
https://discourse.julialang.org/t/multiple-modules-in-single-package/5615/7?u=nhdaly
You can change the top-level, package-named module to simply expose the other four modules as follows:
# <julia_home>/MyPackage/src/MyPackage.jl
module MyPackage
push!(LOAD_PATH, @__DIR__)  # expose all other modules defined in this directory
end
Then to import the other modules, say Bar, the user code would do:
# code.jl
using MyPackage; using Foo;
...
But it's worth noting that Foo, Bar, Baz and Plotting are then all also treated as top-level modules, so you'll want to make their names unique so they don't conflict with other packages/modules (i.e. something like MyPackageFoo, not Foo).
Traditionally I have managed my Angular code like this
//File 1
angular.module('name',[])
//File 2
function TestController(){
}
TestController.prototype.// inherited stuff
angular.module('name').controller('testController',TestController);
This worked great and allowed me to partition my files easily. Now I try to upgrade to 1.3 and get the infamous...
Error: [ng:areq] Argument 'TestController' is not a function, got undefined
Of course this is due to this change which claims a desire to clean up the way people write code. What about this pattern is more complex? Is there a way to maintain this pattern without changing the global settings?
There is actually a comment on the page you linked to that has a fairly solid explanation:
Global controllers refer to your controllers being defined as functions on the window object. This means that they are openly available to conflict with any other bit of JavaScript that happens to define a function with the same name. Admittedly, if you post-fix your controllers with ...Controller then this could well not happen, but there is always the chance, especially if you were to use a number of 3rd party libraries. It is much safer to put these controller functions inside the safety of a module. You then have more control over when and where this module gets loaded. Unfortunately controller names are global across an individual Angular app, so you still have the potential for conflict, but at least you can't clash with completely different code in the JavaScript global namespace.
So the idea is that global controller functions could conflict with any other global function in any javascript you use. So to eliminate the chance of a conflict with your own code or a third-party script, not using global controllers makes your code safer and more consistent.
As mentioned in the comments by @Brett, you can use an IIFE around your prototyping. Here is an update of your plunk that does that; the main change just looks like this:
(function() {
  TestController.prototype.name = 'World';
})();
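For context, a whole controller file wrapped that way could look something like this (only a sketch; your plunk may differ in the details):
// test-controller.js
(function() {
  function TestController() {
  }

  // The prototype assignment from above, now scoped to this IIFE instead of the window object.
  TestController.prototype.name = 'World';

  // Register on the module so Angular can resolve it without a global function lookup.
  angular.module('name').controller('testController', TestController);
})();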
Two things come to my mind:
1) This way, functions won't be kept in memory longer than they should be.
2) If you minify your code, the minifier has to generate new names for all the global objects, which is fine when you have a small project but becomes a problem when it's not.
It should also prevent tests from modifying data they shouldn't.
I'm writing a module for Drupal 7. It must get the current $node variable in one function, e.g. hook_init() or similar, and this variable should be accessible from another function. I found a discussion about node_load(arg(1)), and the author of the solution said that performance is no concern because of caching.
So, should I just call node_load() in every function where I need it, without being afraid of performance issues, or are there other ways to pass variables between module functions?
To be more exact: I've placed a block from my module on pages with certain node types, and in the hook_block_view() function I need access to the current node object. I've succeeded using node_load() so far, but maybe there is a better way.
Thanks in advance!
In Drupal 7, node_load() calls entity_load(), which caches the results, so if you call node_load() twice, the second call is served from the cache.
Using node_load() is a better solution than passing global variables between modules, especially if you use a third-party caching module (APC, memcached or others). If you care about performance, just try to use one of those (I prefer memcached).
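A rough sketch of what that could look like in hook_block_view() (MYMODULE, the block delta and the rendering are placeholders, not your actual module):
function MYMODULE_block_view($delta = '') {
  $block = array();
  if ($delta == 'current_node_block' && arg(0) == 'node' && is_numeric(arg(1))) {
    // node_load() is backed by entity_load()'s static cache, so calling it here
    // is cheap even if another hook already loaded the same node.
    $node = node_load(arg(1));
    $block['subject'] = t('Current node');
    $block['content'] = check_plain($node->title);
  }
  return $block;
}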