Web2py and/or Cappuccino (Objective-J) for CPU-intensive tasks - google-app-engine

I started a CPU-intensive project in web2py which at some point needs a dynamic GUI interface. Due to the intensive nature of the processing, I am unsure whether embedding this Python code (in web2py views) would be cost-prohibitive when deployed to GAE (Google App Engine) with its server-side processing, at least compared to using another framework, etc. Correct me if I am misinterpreting how the code will be processed, or if there is an easy way around this (a separate or integrated client-side processing implementation, if possible).
Given the lack of information cross-referencing web2py with Cappuccino, I assume combining their mechanisms would be an untenable prospect.

Related

NVIDIA Triton vs TorchServe for SageMaker Inference

NVIDIA Triton vs TorchServe for SageMaker inference? When to recommend each?
Both are modern, production-grade inference servers. TorchServe is the DLC default inference server for PyTorch models. Triton is also supported for PyTorch inference on SageMaker.
Does anyone have a good comparison matrix for both?
Important notes on where the two serving stacks differ:
TorchServe does not provide the Instance Groups feature that Triton does (that is, stacking many copies of the same model, or even different models, onto the same GPU). This is a major advantage for both real-time and batch use cases, as the performance increase is almost proportional to the model replication count (i.e. 2 copies of the model get you almost twice the throughput and half the latency; check out a BERT benchmark of this here). It is hard to match a feature that is almost like having 2+ GPUs for the price of one; see the sample config after these notes.
If you are deploying PyTorch DL models, odds are you often want to accelerate them with GPUs. TensorRT (TRT) is a compiler developed by NVIDIA that automatically quantizes and optimizes your model graph, which represents another huge speed-up, depending on GPU architecture and model. It is understandably probably the best way of automatically optimizing your model to run efficiently on GPUs and make good use of Tensor Cores. Triton has native integration to run what are called TensorRT engines (it can even convert your model to a TRT engine automatically via the config file, as the sample config below shows), while TorchServe does not (even though you can use TRT engines with it).
There is more parity between the two when it comes to other important serving features: both have dynamic batching support, you can define inference DAGs with both (not sure if the latter works with TorchServe on SageMaker without a big hassle), and both support custom code/handlers instead of just being able to serve a model's forward function.
Finally, MME on GPU (coming shortly) will be based on Triton, which is a valid argument for customers to get familiar with it so that they can quickly leverage this new feature for cost-optimization.
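For concreteness, here is a minimal sketch of what the instance-group, TensorRT and dynamic-batching features above look like in a Triton config.pbtxt; the model name, batch size and precision are hypothetical examples, not values from the answer:

```
# Sketch of a Triton model configuration (config.pbtxt).
name: "bert_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 16

# Instance Groups: run two copies of the model on the same GPU.
instance_group [
  { count: 2, kind: KIND_GPU }
]

# Ask Triton to build and run a TensorRT engine for this model automatically.
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "tensorrt"
        parameters { key: "precision_mode" value: "FP16" }
      }
    ]
  }
}

# Dynamic batching, mentioned above as a point of parity with TorchServe.
dynamic_batching {
  max_queue_delay_microseconds: 100
}
```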
Bottom line, I think that Triton is just as easy (if not easier) to use, a lot more optimized/integrated for taking full advantage of the underlying hardware (and will be updated to stay that way as newer GPU architectures are released, enabling an easy move to them), and in general blows TorchServe out of the water performance-wise when its optimization features are used in combination.
Because I don't have enough reputation to reply in comments, I'm writing an answer.
MME stands for Multi-Model Endpoints. MME enables sharing GPU instances behind an endpoint across multiple models, and dynamically loads and unloads models based on incoming traffic.
You can read more about it in this link.

Logic app or Web app?

I'm trying to decide whether to build a Logic App or a Web App.
It has to do things I'm quite comfortable doing in C#: receive messages in various formats (a few thousand per day), translate them, make API calls and forward them. None of the endpoints are widely used, so the out-of-the-box connectors won't be a benefit. Some require custom headers, the contents of which are calculated using a hashing algorithm. Some of the work involves converting Json into XML and vice-versa.
From what I've read, one of the key points of difference of Logic Apps is that you don't have to write any code. Since our organisation is actually quite comfortable with code, that doesn't feel like it'll actually be a benefit.
Am I missing something? Are there any compelling reasons why a Logic App would be better than a Web App in this instance?
Using Logic Apps has a few additional benefits over just writing code which include:
Out of box monitoring. For every execution you get to see exactly what happened in each step of the process with a monitoring view that replicates your Logic App design view.
Built-in failure handling. Logic Apps will automatically retry calls on failure cases, and also lets you either customize the retry policy or build a custom retry policy with a do-until pattern (see the JSON sketch after this list).
Out of box alerting. You can configure alerts to inform you of failures.
Serverless. You don't worry about sizing or scaling and you pay by consumption.
Faster development. Logic Apps lets you build out the solution faster, especially considering that you don't have to code the monitoring views, alerting, and error handling that come out of the box with Logic Apps.
Easy to extend. If you are already using a Logic App, access to over 125 connectors to various services makes it easy to add business value or make it smarter, for example by including things like Cognitive Services in your workflow with very little extra effort.
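To illustrate the built-in failure handling, a retry policy is just a property on an action in the workflow definition JSON. A minimal sketch with a hypothetical action name and URI (retryPolicy is a documented field on HTTP actions; the values here are examples):

```json
"Call_backend_API": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.com/orders",
    "retryPolicy": {
      "type": "exponential",
      "count": 4,
      "interval": "PT15S"
    }
  },
  "runAfter": {}
}
```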
I've decided to keep away from Logic Apps for these reasons:
It is not supported outside Azure. We aren't tied to any other providers, and to use Logic Apps would break that independence.
I don't know how much of the problem is readily solvable using Logic Apps. (It seems I would be solving all sorts of problems which wouldn't be problems if I were using C#. This article details some issues encountered while developing a simple process using an earlier version of Logic Apps.)
Nobody has come up with an argument for why we should use it that is more compelling than the reasons I've given above (especially the first one), so it would be a gamble with little to gain and plenty to lose.
You can think of Logic Apps as an orchestrator - something that takes external pieces of functionality, and weaves a workflow together.
It has nothing to do with your requirement of "writing code" - your code can be external functions on any platform - on-prem, AWS, Azure, Zendesk, and all of your code can be connected together using Logic Apps.
Regardless of which platform you choose, you will still have cross-cutting concerns such as monitoring, logging, alerting, deployments, etc., and Logic Apps addresses all of those requirements very robustly.

What factors to consider when choosing a Multi-model DBMS? (OrientDB vs ArangoDB)

I am looking to dip my hands into the world of multi-model DBMSs. I have no particular use cases; I just want to start learning.
I find that there are two prominent ones, OrientDB and ArangoDB, but I was unable to find any meaningful, unopinionated comparison between them. Can someone shed some light on the difference in features between the two, and any caveats in using one over the other? If I learn one, would I be able to easily transition to the other?
(I tagged FoundationDB as well, but it is proprietary and I probably won't consider it)
This question asks for a general comparison between OrientDB and ArangoDB for someone looking to learn about multi-model DBMSs, and not an opinionated answer about which is better.
Disclaimer: I would no longer recommend OrientDB, see my comments below.
I can provide a slightly less biased opinion, having used both ArangoDB and OrientDB. It's still biased, as I'm the author of OrientDB's node.js driver, oriento, but I don't have a vested interest in either company or product; I've just necessarily used OrientDB more.
ArangoDB and OrientDB are both targeting a similar market and have a lot of similarities:
Both are multi-model, you can use them to store documents, graphs and simple key / values.
Both have support for Gremlin, but it's firmly a second class citizen compared to their own preferred query languages.
Both support server-side "stored procedures" in JavaScript. In both systems this comes via a slightly less than idiomatic JavaScript API, although ArangoDB's is a lot better. This is getting fixed in a forthcoming version of OrientDB.
Both offer REST APIs, both aim to be usable as an "API Server" via JavaScript request handlers. This is a lot more practical in ArangoDB than OrientDB.
Both are distributed under a permissive license.
Both are ACID and have transaction support, but in both the transactions are server-side operations - they're more like atomic batches of commands rather than the kinds of transactions you might be used to in a traditional RDBMS.
However, there are a lot of differences:
ArangoDB has no concept of "links", which are a very useful feature in OrientDB. They allow unidirectional relationships (just like a hyperlink on the web), without the overhead of edges.
ArangoDB is written in C++ (and JavaScript), whereas OrientDB is written in Java. Both have their advantages:
Being written in C++ means ArangoDB uses V8, the same high-performance JavaScript engine that powers node.js and Google Chrome, whereas being written in Java means OrientDB uses Nashorn, which is still fast but not the fastest. This means that ArangoDB can offer a greater level of compatibility with the node.js ecosystem compared to OrientDB.
Being written in Java means that OrientDB runs on more platforms, including e.g. Raspberry PI. It also means that OrientDB can leverage a lot of other technologies written in Java, e.g. OrientDB has superb full text / geospatial search support via Lucene, which is not available to ArangoDB.
OrientDB uses a dialect of SQL as its query language, whereas ArangoDB uses its own custom language called AQL. In theory, AQL is better because it's designed explicitly for the problem; in practice, though, it feels quite similar to SQL but with different keywords, and is yet another language to learn, while OrientDB's implementation feels a lot more comfortable if you're used to SQL (see the side-by-side sketch after this list). SQL is declarative whereas AQL is imperative - YMMV here.
ArangoDB is a "mostly-memory" database, it works best when most of your data fits in RAM. This may or may not be suitable for your needs. OrientDB doesn't have this restriction (but also loves RAM).
OrientDB is fully object oriented - it supports classes with properties and inheritance. This is exceptionally useful because it means that your database structure can map 1-1 to your application structure, with no need for ugly hacks like ActiveRecord. ArangoDB supports something fairly similar via models in Foxx, but it's more like an optional addon rather than a core part of how the database works.
ArangoDB offers a lot of flexibility via Foxx, but it has not been designed by people with strong server-side JS backgrounds and reinvents the wheel a lot of the time. Rather than leveraging frameworks like express for their request handling, they created their own clone of Sinatra, which of course makes it almost the same as express (express is also a Sinatra clone), but subtly different, and means that none of express's middleware or plugins can be reused. Similarly, they embed V8, but not libuv, which means they do not offer the same non-blocking APIs as node.js, and therefore users cannot be sure whether a given npm module will work there. This means that non-trivial applications cannot use ArangoDB as a replacement for the backend, which negates a lot of the potential usefulness of Foxx.
OrientDB supports first class property level and database level indices. You can query and insert into specific indexes directly for maximum efficiency. I've not seen support for this in ArangoDB.
OrientDB is the more established option, with many high profile users. ArangoDB is newer, less well known, but growing fast.
ArangoDB's documentation is excellent, and they offer official drivers for many different programming languages. OrientDB's documentation is not quite as good, and while there are drivers for most platforms, they're community powered and therefore not always kept up to date with bleeding edge OrientDB features.
If you're using Java (or a Java bridge), you can embed OrientDB directly within your application, as a library. This use case is not possible in ArangoDB.
OrientDB has the concept of users and roles, as well as Record Level Security. This may be a killer feature for you; it is for me. It also supports token-based authentication, so it's possible to use OrientDB as your primary means of authorizing/authenticating users. OrientDB also has LDAP integration. In contrast, ArangoDB supports only a very simple auth option.
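To make the query-language difference concrete, here is the same lookup written both ways; the Person class/collection and the name property are hypothetical examples:

```
OrientDB (SQL dialect):
  SELECT FROM Person WHERE name = 'Alice'

ArangoDB (AQL):
  FOR p IN Person
    FILTER p.name == 'Alice'
    RETURN p
```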
Both systems have their own advantages, so choosing between them comes down to your own situation:
If you're building a small application, and you're a web developer optimizing for developer productivity, it will probably be easier to get up and running quickly with ArangoDB.
If you're building a larger application, which could potentially store many gigabytes or terabytes of data, or have many thousands of concurrent users, or have "enterprise" use cases, or need fine grained security controls, OrientDB is the one for you.
If you're storing RDF or similarly structured linked data, choose OrientDB.
If you're using Java, just choose OrientDB.
Note: This is (my opinion of) the state of play today, things change quickly and I would not underestimate the ruthless efficiency of the awesome team behind ArangoDB, I just think that it's not quite there yet :)
Charles Pick (codemix.com)

What are the main downfalls when using Google Web Toolkit (GWT)

After a long debate between many RIA/Ajax frameworks, we settled on GWT. When reading about it, this framework seems to do everything well and easily. But like any technology, there is always a downside, and we learn them the hard way.
What are the main downfalls or problems when using Google Web Toolkit (GWT)?
(e.g.: Back/Forward button support, slow response time, layout positioning, JavaScript bugs, etc.)
So far, I got the following from the responses:
Lots of code for simple UI
Slow compilation
Thank you
I have been using GWT for nearly 2 years. Although I could be called a fanatic about GWT, there are some issues that one should know about...
As others have said, JavaScript compilation is slow. My application requires nearly 4 minutes to compile on a Core i7 CPU with 8 GB of memory. The total size of the generated JavaScript is about 5 MB. But thanks to development mode, compilation to JavaScript is not needed frequently.
GWT RPC is extremely slow in development mode; it is 100 times slower than in production mode. It was quite a big problem for us, and we did consider giving up GWT for this reason alone. The reason for this sluggish performance of GWT RPC in dev mode is serialization: serialization of types other than String is unbelievably slow in dev mode. We implemented our own custom serialization, which is nearly 30 times faster than GWT's built-in serialization.
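The workaround amounts to flattening objects into Strings before they cross the RPC boundary, since String is the fast path in dev mode. A minimal sketch; the DTO and its wire format are hypothetical, not the answerer's actual code:

```java
// Hypothetical DTO, flattened to a String so GWT RPC in development mode
// only has to serialize the (fast) String type.
public class PointDto {
    public final int x;
    public final int y;

    public PointDto(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Flatten to a simple delimited String for the RPC call.
    public String serialize() {
        return x + ";" + y;
    }

    // Rebuild the object on the receiving side.
    public static PointDto deserialize(String s) {
        String[] parts = s.split(";");
        return new PointDto(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
    }
}
```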
The claim that writing a GWT application requires only knowledge of Java is just an illusion. You should have solid knowledge of CSS and the DOM. If you don't, you will spend too much time debugging your user interface.
You should consider that you can only use a small subset of the JDK to implement GWT applications. Reflection is not available; you should use third-party libraries, such as GWT ENT, or write your own generator for reflection.
Another caveat to consider is the size of the JavaScript generated by the GWT compiler. Most GWT applications consist of a single web page, as opposed to multi-page traditional web applications, so loading the application requires significant time. Although this can be mitigated by using a multi-module approach and code splitting, using these techniques is not always straightforward.
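Code splitting itself is done by wrapping deferred code in GWT.runAsync; the compiler cuts a separate JavaScript fragment at that point. A minimal sketch, where ReportsModule is a hypothetical feature class:

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

public class ReportsLoader {
    // The body of onSuccess() is compiled into a separate JavaScript
    // fragment that is downloaded only when this split point is reached.
    public void openReports() {
        GWT.runAsync(new RunAsyncCallback() {
            public void onFailure(Throwable reason) {
                Window.alert("Failed to load the reports module");
            }

            public void onSuccess() {
                new ReportsModule().show(); // hypothetical deferred feature
            }
        });
    }
}
```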
All calls to the server are asynchronous, so you have to adapt yourself to writing asynchronous code. And the downside of asynchronous code is that it is more complex and less readable than the equivalent synchronous code.
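Concretely, every GWT RPC call takes a callback instead of returning a value, which is what pushes the whole client toward this style. A typical call looks like the following; the service and method names are hypothetical:

```java
// greetingService is the asynchronous client-side stub that GWT
// generates for a hypothetical RPC service interface.
greetingService.greetServer("hello", new AsyncCallback<String>() {
    public void onFailure(Throwable caught) {
        // Runs if the call fails; there is no return value to wait on.
        Window.alert("RPC failed: " + caught.getMessage());
    }

    public void onSuccess(String result) {
        // Runs later, whenever the server responds.
        Window.alert(result);
    }
});
```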
Here are my observations on the downfalls:
steep learning curve if one wants to use GWT effectively in large applications, due to the enormous number of high-level conventions associated with GWT:
Model View Presenter paradigm - in fact there are 2 different approaches to MVP proposed on the GWT site.
UiBinder
CellWidgets
Editors
concept of Activities and Places
RequestFactory
AutoBeans
asynchronous requests require a different mode of thinking when it comes to designing the whole application
long compilation time; it does not affect Development Mode as much as full builds (all the permutations for all the browsers and languages are compiled, which can take hours for big projects). JRebel can reduce the need for Development Mode restarts a bit. Restricting permutations also helps; see the module XML sketch after this list.
problems with unit testing - GWTTestCase takes so long to start that it is unusable for unit testing. However, thanks to GWTTestSuite it can work well for integration testing. Thanks to keeping to a clean MVP, it is also possible to unit test Presenter logic by mocking Displays (see my answer).
it requires some experience to decide whether specific logic should be implemented client-side (compiled to JS) or server-side.
and of course there are some small bugs, especially in new features like Editors and RequestFactory. They are usually resolved quickly in new releases, but it can be annoying when you encounter a GWT issue. Anyway, this last downfall applies to any Java framework I have used so far. ;)
lack of reflection on the client side, which can be worked around with Deferred Binding and Generators, but that is another convention to learn.
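On the compile-time point above: a common mitigation during development is to restrict the permutations in the .gwt.xml module file so the compiler targets one browser engine and one locale instead of the full matrix. A sketch, with hypothetical module contents and example property values:

```xml
<!-- Development-only module settings to cut compile time. -->
<module>
  <inherits name="com.google.gwt.user.User"/>

  <!-- Compile for a single browser engine instead of all of them. -->
  <set-property name="user.agent" value="gecko1_8"/>

  <!-- Compile for a single locale. -->
  <set-property name="locale" value="en"/>

  <entry-point class="com.example.client.MyApp"/>
</module>
```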
If I were to start a new GWT project, I would:
add a dependency on the Google GIN library (unfortunately it does not work with GWT 2.2 at the moment, but should be compatible soon).
design general layout with LayoutPanels
structure the application "flow" according to the concept of Places and Activities.
put all the Places into a separate GWT module (common navigation references)
put each Activity in its own GWT module (this could help with application code splitting later on)
treat each Activity as glue code which has View and Presenter providers injected with GIN
design data entities to be compatible with RequestFactory
create all data editors with UiBinder, MVP and Editors framework in mind
use RequestFactory in Presenters, as well as in Activities (to fetch initial data to be shown).
inject with GIN every identified common component, like a standard date format, etc. (a minimal GIN wiring sketch follows this list).
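For the GIN wiring mentioned in this list, here is a minimal sketch of a module and Ginjector; all application class names are hypothetical:

```java
import com.google.gwt.inject.client.AbstractGinModule;
import com.google.gwt.inject.client.GinModules;
import com.google.gwt.inject.client.Ginjector;
import com.google.inject.Singleton;

// Bindings for views, presenters and shared components (date formats, etc.).
class AppClientModule extends AbstractGinModule {
    @Override
    protected void configure() {
        bind(MainView.class).in(Singleton.class);      // hypothetical view
        bind(MainPresenter.class).in(Singleton.class); // hypothetical presenter
    }
}

// The Ginjector is the client-side entry point to the object graph,
// typically obtained once via GWT.create(AppGinjector.class).
@GinModules(AppClientModule.class)
interface AppGinjector extends Ginjector {
    MainPresenter mainPresenter();
}
```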
The Spring Roo tool can generate a lot of GWT-based code for standard application elements.
I did a prototype app with GWT some time ago, and I found that compiling the Java to JavaScript took a very long time. What's more, the compile time increased noticeably with each line of code we wrote.
I just wasn't happy with the code-compile-test cycle getting slower and slower over time.
Another question on SO about the compiler: How do I speed up the gwt compiler?
I think the main disadvantage is that GWT often requires you to write lots of code to accomplish simple tasks (but it's getting better and better with each release). On the other hand, it's brilliant when it comes to developing complex, custom widgets; that is where it shines.
Over a couple of projects GWT has proved to be very good in terms of performance, and there haven't been many bugs; it's very good in terms of cross-browser support, imo.
As a fan of native JavaScript...
I prefer jQuery over GWT, because it's easy to animate or accomplish complicated tasks without writing many classes.

Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense on our customers due to multi-user license fees for database connections, etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI, and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add to or change at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompiling.
Our new system will be Linux-based, with the possibility of a client application providing functionality from one or more Windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone who has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would be a ton of cash to convert over to PCs).
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things), and then did some customization work on top of that (which is still ongoing). This setup gave us the added benefit of a fully integrated G/L (general ledger) system, which our current system lacked.
Java for the language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: I missed the part where the system should be a client-server setup. My assumption was that the database and client would run on the same machine.
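To show how lightweight this stack is, here is a minimal sketch of opening an embedded H2 database from Java; the file path and schema are hypothetical (H2 also has a server mode for the client-server case noted in the update):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PosDbDemo {
    public static void main(String[] args) throws Exception {
        // Embedded mode: "jdbc:h2:~/posdb" creates or opens a file-based
        // database; no separate server process or per-seat license needed.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/posdb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS sale ("
                     + "id IDENTITY PRIMARY KEY, "
                     + "total DECIMAL(10,2), "
                     + "sold_at TIMESTAMP)");
        }
    }
}
```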
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise on the platform is cheaper than "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, and you don't really know how much it would cost to re-write.
I suggest you use the browser for the UI.
Organize your application as a web application.
There are tons of options for the back end. You can use Java + MySQL. A Java backend will save you from the Windows/Linux debate, as it runs on both platforms. You won't have any licensing cost for either Java or MySQL. (Edit: there are definitely a lot of other languages with run-times for both Linux and Windows, including PHP, Ruby, Python, etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution though: browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine; or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT-terminals, or a web-browser, or a custom-written 'client' application.
Make sure you have considered the hardware needs here -- are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to reliably integrate those items. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of a thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PCs are cheap -- sub-$200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g. PostgreSQL). They currently support a very complete CHUI environment and they do refactor the code. For example, the conversion separates the UI, the data model and the business logic into separate Java classes.

The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it.

True terminal support is included, so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the code in the runtime is pure Java. No Progress Software Corp technology is used in the resulting system, and it runs on Linux.
At least one converted system is already in production, running a 24 by 7 mission critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.
