My situation
I really want to use Neo4j for a webapp I'm writing, but the app is in Perl.
Besides using the REST API, what are my options for prepared statements? Ideally, I don't want to have to do any forking, and I certainly don't want to have to call an external program.
Why
I'm using prepared statements for security reasons, and a database backend for real-time efficiency + speed + ease of use. As a result, most of these solutions are, at face value, unacceptable for my needs. While I recognize that the "4j" part of Neo4j means "support outside of Java is probably a pipe dream", I still maintain some hope.
EDIT:
(I'll put what I find here.)
So far, I've found:
REST::Neo4p (which has documentation on prepared statement use here, if you'd rather not comb through that first link). It's also worth noting that there's a DBD connector written to run on top of it, DBD::Neo4p.
At the moment, I've provisionally decided to use the DBD connector built atop REST::Neo4p, because it looks easy to use and reasonably safe, although it probably isn't as efficient as I'd want: the JSON returned under the hood will contain a bunch of long link strings, and there'll be HTTP headers with each request/response.
So, for right now, I've decided to use this solution, but I'm leaving the answer unaccepted because I would welcome more lightweight alternatives, unless the JDBC driver uses the REST API under the hood (which I doubt), in which case I suppose it wouldn't matter.
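For reference, the prepared-statement pattern over the REST API keeps the Cypher text and the parameters in separate JSON fields, so user input is never spliced into the query string. A sketch against the legacy cypher endpoint (the exact path and the {name} placeholder syntax vary by Neo4j version):

```
POST /db/data/cypher
Content-Type: application/json

{
  "query": "MATCH (u:User) WHERE = {name} RETURN u",
  "params": { "name": "alice" }
}
```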
Related
I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server is developed in Node.js and the client in Angular 1 (and I am pretty new to both).
I must use a consensus algorithm so that it always shows the same state to all users.
I'm seriously struggling with this, since I cannot find a proper tutorial on its use. I have been studying Paxos implementations, but it seems Raft is much more widely used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task[1], so I'd recommend using an existing strongly consistent one instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them with Node.js though, so I won't comment on the maturity of the clients.
If you insist on implementing consensus on your own, then Raft should be easier to understand; the paper is surprisingly accessible: https://raft.github.io/raft.pdf. There are also some Node.js implementations, but again, I haven't used them, so it is hard to recommend any particular one. The Gaggle README contains an example, and Skiff has an integration test that documents its usage.
Taking a step back, I'm not sure distributed consensus is what you need here. It seems you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other in the order they are received by the server, FIFO (imagine multiple people writing on the same whiteboard: the last one wins). The challenge is with concurrent modifications of existing shapes, but maybe you can fall back to last/first change wins or something like that.
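To make "last change wins" concrete, here is a minimal sketch (written in C; the logic translates directly to a Node.js server) with a hypothetical fixed-size shape table. The server stamps each update on arrival, and a stored shape is overwritten only by an update that is at least as recent:

```c
#include <string.h>

#define MAX_SHAPES 1024

typedef struct {
    long ts;          /* server-assigned arrival stamp (FIFO order) */
    char state[256];  /* serialized shape data */
    int  used;
} Shape;

static Shape shapes[MAX_SHAPES];

/* Last write wins: apply an update unless a newer one is stored. */
void apply_update(int id, long ts, const char *state)
{
    if (id < 0 || id >= MAX_SHAPES) return;
    if (!shapes[id].used || ts >= shapes[id].ts) {
        shapes[id].ts = ts;
        shapes[id].used = 1;
        strncpy(shapes[id].state, state, sizeof shapes[id].state - 1);
        shapes[id].state[sizeof shapes[id].state - 1] = '\0';
    }
}
```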
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
Hope this helps.
[1] Take a look at the Jepsen series (https://jepsen.io/analyses), where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down with an overabundance of accumulated history, which generates a new set of problems beyond simple consistency. An initial prototype could easily send the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could reduce the network overhead.
In one of the projects I am working on, there is a requirement to create and pass some event messages in JSON format from SQL Server to MSMQ.
I found that SQL CLR could be the best way to implement this, but some of my colleagues say it is expensive in terms of performance and memory utilization.
I am targeting a maximum throughput of 20 messages/sec.
The number of messages depends on certain events occurring in the database.
The implementation is required to run for ~10 hours/day.
Please suggest how to achieve this; a code snippet or steps would be a great help.
Any other option for implementing the functionality is always welcome.
Thanks.
I found that SQL CLR could be the best way to implement this, but some of my colleagues say it is expensive in terms of performance and memory utilization.
There is a lot of "it depends" in this conversation, and most of the performance / memory / security concerns you hear are based on misinformation or a simple lack of information.
SQLCLR code can be inefficient, but it can also be more efficient / faster than T-SQL in some cases. It all depends on what you are trying to accomplish and how you approach the problem, both in terms of overall structure and how it is coded. For things that can be done in straight T-SQL, straight T-SQL is nearly always faster. But if you place that T-SQL code in a scalar function / UDF, then it is no longer fast ;-). Inline T-SQL is the fastest, IF you can actually do the thing you are trying to do.
So, if you can communicate with MSMQ via T-SQL, then do that. But if you can't, then yes, SQLCLR could be efficient enough to handle this.
HOWEVER #1, regarding the need for JSON:
I do not think that any of the supported .NET Framework libraries include JSON support (but I need to check again). While it is tempting to try to load Json.NET, that code uses certain coding practices that are not wise in SQLCLR, namely static class variables. SQLCLR is a shared App Domain, so all sessions running the same piece of code share the same memory space. So the Json.NET code shouldn't be used as-is; it would need to be modified (if possible). Some people go the easy route and just set the Assembly to UNSAFE to get past the errors about static class variables that are not marked as readonly, but that can lead to odd / unexpected behavior, so I wouldn't recommend it. There might also be references to unsupported .NET Framework libraries that would need to be loaded into SQL Server as UNSAFE. So, if you want to do JSON, you might have to construct it manually.
HOWEVER #2, the question title is (emphasis added):
How can we write a high performance and SAFE SQLCLR assembly to create and put a json object in MSMQ?
You cannot interact with anything outside of SQL Server in an Assembly marked as SAFE. You will need to mark the Assembly as EXTERNAL_ACCESS in order to communicate with MSMQ, but that shouldn't pose any inherent problem.
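For example, the permission set is declared when the assembly is catalogued. A sketch with a hypothetical assembly name and path (note that EXTERNAL_ACCESS also requires either signing the assembly or setting the database to TRUSTWORTHY):

```sql
-- Hypothetical name and path; EXTERNAL_ACCESS (rather than SAFE)
-- allows the CLR code to reach resources outside SQL Server, e.g. MSMQ.
CREATE ASSEMBLY MsmqBridge
FROM 'C:\assemblies\MsmqBridge.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
```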
I am an integration consultant and tend to use C and Lua in my spare time; unfortunately it is not my day job ;-(
Anyway, I tend to believe that a mixture of C and Lua is perfect for many "product" developments. I currently have an "adapter engine" built in pure C, but would like to actually move the adapter code to Lua....
For example, coding an email adapter in Lua is far easier than in C... yet I like the "engine speed" of C....
But now there is the big question of the security risk: the user can potentially add whatever he or she wants to the Lua scripts in production. Obviously we could chmod the files, but is that really secure?
Ideally I want the C / Lua combination here... but do I literally embed the Lua code in the C application as a char*, or do I issue a lua_dofile?
Thanks for the help
Lynton
First, one of the drawbacks to using C/Lua in production is that it tends to be harder to find developers for these languages. C++ and JavaScript programmers are typically easier to find.
In terms of security, the key here is to use leading practices. Security is about risk reduction; there is no expectation that one can achieve perfect security, so you need to mitigate risk.
Here are my suggestions:
As with all middleware, you need to use a hardened server. This is the first step: if the server is compromised, you are in trouble regardless of platform. Middleware should NOT be in the DMZ.
You want to store the Lua code external to the compiled code (otherwise you lose the advantage of using Lua). Make that storage as secure as you can. chmod is good; a secure DB server is better. The more secure the script store, the more secure the system.
You can encrypt the Lua source. This is a trade-off, since it makes it a little harder to gain the advantage of easy updates and modification, and you will probably need to implement decrypted-script caching for performance.
Your security is as strong as your weakest link. If you provide a way to modify the Lua source via external access, this will be an attack vector. Avoid this design if you can.
You should consider putting in change-management checks: for example, a separate place in the system where a checksum for each Lua file is stored. Then, if an unauthorized change is made to a script, you can abort functioning until the security breach is mitigated (see the sketch below).
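A minimal sketch of the file-based approach combined with the checksum idea, assuming Lua 5.x (this also touches the char* vs. lua_dofile question: luaL_dostring runs an embedded string, while luaL_dofile loads the external file). fetch_trusted_checksum() and the FNV hash are hypothetical stand-ins:

```c
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* FNV-1a over the file contents; a real deployment would use a
   cryptographic hash instead. */
static unsigned long long checksum_file(const char *path)
{
    unsigned long long h = 1469598103934665603ULL;
    FILE *f = fopen(path, "rb");
    int c;
    if (!f) return 0;
    while ((c = fgetc(f)) != EOF)
        h = (h ^ (unsigned char)c) * 1099511628211ULL;
    fclose(f);
    return h;
}

/* Hypothetical: reads the trusted checksum from the secure store. */
extern unsigned long long fetch_trusted_checksum(const char *path);

int run_adapter(const char *path)
{
    lua_State *L;
    if (checksum_file(path) != fetch_trusted_checksum(path)) {
        fprintf(stderr, "checksum mismatch, refusing to run %s\n", path);
        return -1;
    }
    L = luaL_newstate();
    luaL_openlibs(L);
    if (luaL_dofile(L, path) != 0) {   /* load + run the script */
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return -1;
    }
    lua_close(L);
    return 0;
}
```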
Other than the drawback I mentioned above, I don't think there is anything fundamentally flawed in your plan. If it can aid in making a good middleware system I would say go for it. Just mitigate the risk of your adapter scripts getting compromised as much as possible.
To expand on Donal's comment: given the popularity of Node, I would say that JavaScript is the leading practice in scripted middleware right now. If you can handle learning a new scripting language, it would be a good idea given the support, popularity, and tools available.
Your primary requirement in terms of security is to ensure that the server cannot evaluate anything sent by clients through any mechanism (not just direct evaluation, but also via supplied filenames). A lesser requirement is that clients should not have any mechanism to produce a message that allows other clients to evaluate unexpected things (i.e., avoid XSS trouble). If you can satisfy these requirements, you've got a safe server, and the language(s) it is written in won't matter; using multiple languages is in fact a good idea, as it lets you leverage the best of each.
It's also a good idea to use a carefully configured firewall, plenty of privilege restriction, some kind of DMZ proxy system to at least verify basic syntactic legality of messages, etc. These things are all just good practice. (Aim to configure things so the server can only just manage to do the service you want to provide.)
With sending email, there are a few other things to beware of. In particular, you do not want to be a conduit for spam, so you need to ensure that arbitrary email headers cannot be constructed from client input and that the data formats you send out are non-executable (or that the data is constructed in a way that is non-evil). Rate limiting is also a good idea; unless your site is insanely popular, you shouldn't need to send more than a few messages a second across all clients. If you're only ever sending to a small set of addresses (e.g., a fixed contact address), then you can relax these restrictions a bit (but still be careful of header injection). In all cases, route all email through a specialist email-handling server instead of doing the routing yourself, as this avoids a whole lot of configuration difficulties.
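One concrete defence, as a minimal C sketch: header injection relies on smuggling CR/LF into a header value, so reject any client-supplied value that contains either character before it reaches a header line (header_value_ok is a hypothetical helper name):

```c
#include <string.h>

/* Returns 1 if v is safe to embed in a single email header line,
   i.e. it contains no CR or LF that could start an injected header. */
int header_value_ok(const char *v)
{
    return strpbrk(v, "\r\n") == NULL;
}
```

The server would run this over, say, a client-supplied subject or reply-to value and reject the request on failure.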
I want to develop a web application in ANSI C, since I want the application to be faster than the alternatives, and it should support all the kinds of operations that scripting languages such as PHP or Python provide. If you have an idea for faster database access that doesn't involve C, please recommend anything better.
If anyone has ideas, please share tutorials to get started.
I'm not aware of any C web application frameworks, so if you wish to write your application in C you will need to handle all communication between your application / framework and the web server through a web server interface. The easiest starting point for understanding this is probably to read up on CGI; however, once you understand how CGI works you will want to move on to FastCGI instead, as although FastCGI is more complex, CGI is notoriously slow.
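To give a feel for that interface, here is a minimal CGI program in C; the server passes request data through environment variables and stdin, and the program writes the response (headers, blank line, body) to stdout. FastCGI is structured similarly, but the process persists across requests:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* CGI: the web server sets request metadata in the environment. */
    const char *qs = getenv("QUERY_STRING");

    /* Headers, then a blank line, then the body. */
    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><body><h1>Hello from C</h1>");
    /* NOTE: real code must HTML-escape qs before echoing it,
       or this becomes an XSS hole. */
    printf("<p>Query string: %s</p>", qs ? qs : "(none)");
    printf("</body></html>\n");
    return 0;
}
```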
However, I strongly recommend that you don't bother unless you are attempting this for academic purposes:
The path you are suggesting involves low-level stuff. It's interesting, but it's a lot of work to achieve things that can be done incredibly easily in any half-decent web application framework.
With web applications, the thing that matters is throughput (the number of requests you can handle in a given time period), not speed (the time it takes to process a single request). It might seem that a web site written in C would be much faster, but in reality the execution speed of C counts for very little compared with, for example, caching and other optimisations.
Other frameworks already exist that are proven and lightning fast!
The end result is that anything that you come up with will almost certainly be more work and slower than using "slow" scripting languages.
Any kind of 'scripting' won't give you the 'raw speed' it seems you might be looking for.
I would strongly discourage this whole train of thought, though. There are plenty of web frameworks out there that let you write code that runs very efficiently. Even 'scripted' web frameworks often cache the scripts, which removes much of the initial slowdown of parsing and executing them.
And frameworks that use compiled bytecode/IL can be quite fast once loaded/JIT'ed.
If you plan to write your own HTTP engine in C, though, I doubt you would be able to get something remotely close to as fast as what's already out there until you were very familiar with the existing solutions: how they all work, all the variations in the protocols involved, and so on.
I've heard a lot of good things about FastCGI. Maybe you should try that?
You should check out G-WAN by TrustLeap. It allows you to write servlets in ANSI C, taking care of all the nitty-gritty of the HTTP protocol.
http://www.trustleap.com/
I'm trying to create an XMPP library (and later a server) from scratch in Go (although the language itself is irrelevant) as a means to learn what I can about the XMPP protocol and server software development in general.
As many of you know, XMPP is a messaging protocol based on XML that involves an enormous number of short but frequent XML stanzas over long-lived streams. I'm thinking that for such applications an event-based XML parser should be better, because I won't need DOM and all that (correct me if I'm wrong). Please keep in mind that this library is intended for servers, so there might be many instances running at once.
Which one of the two has better performance and memory usage for that use case, libxml2 or expat?
There is a whole project devoted to answering the question of XML performance called XML Benchmark.
The short answer, in my opinion, is to use libxml2, but I have other considerations beyond pure performance, such as platform availability. That said, it is generally faster than expat according to the latest numbers, though it's fairly close in the grand scheme of things.
And yes, you want to use the SAX parser, not the DOM parser.
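For illustration, here is a minimal expat sketch in C of the event-based (SAX-style) approach: you register element handlers and feed the parser incrementally, and no DOM is ever built, which suits long-lived XMPP streams:

```c
#include <stdio.h>
#include <string.h>
#include <expat.h>

static void XMLCALL on_start(void *userdata, const XML_Char *name,
                             const XML_Char **attrs)
{
    (void)userdata; (void)attrs;
    printf("open  <%s>\n", name);   /* react to a stanza opening */
}

static void XMLCALL on_end(void *userdata, const XML_Char *name)
{
    (void)userdata;
    printf("close </%s>\n", name);  /* react to a stanza closing */
}

int main(void)
{
    const char *chunk = "<stream><message to='a@b'/></stream>";
    XML_Parser p = XML_ParserCreate(NULL);

    XML_SetElementHandler(p, on_start, on_end);
    /* In a server, call XML_Parse once per network read with
       is_final = 0 until the stream actually ends. */
    if (XML_Parse(p, chunk, (int)strlen(chunk), 1) == XML_STATUS_ERROR)
        fprintf(stderr, "parse error: %s\n",
                XML_ErrorString(XML_GetErrorCode(p)));
    XML_ParserFree(p);
    return 0;
}
```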