I need to create a doorman in my test Corda network, but I do not understand how to do it.
I can't find any tutorial for the Corda doorman.
This page seems to be the only source of info on the doorman service: Network permissioning (Doorman)
It is really difficult to implement the doorman service just using this page.
I don't know what to do, can somebody explain the doorman for dummies?
Pierre, I don't advise implementing your own Doorman - that would not be time well spent. Instead, I suggest using standard tools and procedures to create, sign and distribute certificates to Corda nodes, as specified in the 'Network Permissioning' section of the docs.
When you start planning to deploy Corda for real-world production workloads, you will also be looking for a supported way of automating the management of network membership. Production deployments of Corda have other enterprise-grade requirements as well, such as HA, DR, performance, manageability, compliance and security. These requirements will be addressed by the upcoming release of Corda Enterprise. Naturally, Doorman - the component which handles the management of network membership - will be part of Corda Enterprise.
For development and testing purposes there will be a stand-alone installable Doorman package with a restricted license for non-production use.
I would like to embed a graph database in my application (shipping for Windows, Linux & Mac). I narrowed the search down to ArangoDB & OrientDB. I was able to get embedded OrientDB to work, but I'd still like to try ArangoDB to make an informed decision. Documentation for the OrientDB embedded version is pretty clear, while I can't find anything for ArangoDB. ArangoDB is written in C++, so I also have to figure out how to make it portable across platforms and how to install it with my application. The usage of ArangoDB (or OrientDB) should be transparent to the users of our application. Thanks!
Update: I forgot to mention, our application is in C++. We were looking for instructions that can help us build the ArangoDB binary with our existing modules. We can then figure out how to load the binaries and talk to them.
It's possible to install an instance of ArangoDB with your application installation.
It installs into its own directory, and its key assets are:
ArangoDB Binaries
ArangoDB Data files
ArangoDB Log files
ArangoDB Foxx Applications (optional)
ArangoDB can run as a service, and it is configured via a file called arangod.conf.
This file centrally controls settings like the ports it runs on, the IP addresses it listens to, the database engine to use, SSL and security settings, and much more.
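For illustration, a minimal arangod.conf along those lines might look like the following sketch. The section and key names follow the ArangoDB docs; the paths, port and engine choice are just example values that will differ per install:

```ini
# Illustrative arangod.conf fragment (example paths and port)
[database]
directory = /var/lib/arangodb3          ; where the data files live

[server]
endpoint = tcp://127.0.0.1:8529         ; IP address and port to listen on
storage-engine = rocksdb                ; database engine to use
authentication = true                   ; require login

[log]
file = /var/log/arangodb3/arangod.log   ; log file location

[ssl]
keyfile = /etc/arangodb3/server.pem     ; point at a cert/key bundle for SSL
```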
Taking Windows as an example, you can do a silent installation of ArangoDB, and then use tools like PowerShell or DOS batch files to stop/start the ArangoDB service, copy in an arangod.conf file with your required configuration settings, etc.
It's even possible to generate an SSL certificate and apply it to the ArangoDB instance so that you can have SSL connectivity to the database if required.
Additionally, you can drive the ArangoShell (arangosh) via scripts, which allows you to create databases, restore default data from a backup, create ArangoDB users and assign rights.
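As a sketch of that kind of provisioning script, something like the following could be run with `arangosh --javascript.execute setup.js` against a fresh instance; the database name "myapp" and user "appuser" are made-up examples:

```js
// setup.js - runs inside arangosh, not in a plain Node.js process, e.g.:
//   arangosh --server.endpoint tcp://127.0.0.1:8529 --javascript.execute setup.js
const users = require("@arangodb/users");

db._createDatabase("myapp");                 // create an application database
users.save("appuser", "s3cret");             // create an ArangoDB user
users.grantDatabase("appuser", "myapp", "rw"); // assign read/write rights
```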
It sounds like you need to get more comfortable with ArangoDB as a product, and then start to mess around with installing, uninstalling, configuring, and backing up/restoring databases.
I've also evaluated ArangoDB versus OrientDB, and I picked ArangoDB because it runs faster, has many more updates, and their driver packs are well written.
When it comes to embedded databases, you really need a multi-model database; being able to store standard documents as well as graph data in one database engine is invaluable.
Additionally, have a really good look at the Foxx MicroService architecture of ArangoDB. It allows you to host business logic behind REST APIs and job queues running right in the ArangoDB database. This means your application doesn't even need raw table access to the database; instead it can access your data via a REST API, your internal schema is hidden from users, and your business logic stops them from doing silly things and wrecking the database.
By having a REST API data layer between your application and the database, it gives you more flexibility on how people consume your data, giving you more options about opening it up in a safe way, knowing your application logic will keep your data safe.
If you choose to use Foxx, there is a cool new tool ArangoDB has released called foxx-cli, which lets you script the installation and configuration of Foxx MicroServices in your database. This is a super powerful tool, as it makes it possible to fully install and configure an ArangoDB server, database and internal settings via installation scripts.
Take time to learn ArangoDB, as with all skills it takes time to really get to know it. I'm still learning something every day and I've only been using it for 2 years :)
If you're using NodeJS as your platform (which I have to assume, as you don't mention what programming language you're using), you can use Electron (https://electron.atom.io) with the ArangoJS driver (http://npmjs.com/package/arangojs). If an ORM is necessary, I'd recommend CaminteJS (http://npmjs.com/package/caminte), which has built-in support for ArangoDB; although the documentation is of a poor standard, it should suffice with some programming knowledge.
Off-topic: Electron lets you create cross-platform desktop applications in pure HTML, JS and CSS. You can also use Cordova if you're targeting mobile platforms.
You could also use Foxx to perform some of your application logic (this is down to your personal preference) or also create an API platform (with for example Restify).
Most database systems are written in C++, but that does not mean you can only access them via C++; drivers are provided for the popular languages. If you use a specific language, then update the question so we can help further.
You might also want to read this: https://www.arangodb.com/2018/02/nosql-performance-benchmark-2018-mongodb-postgresql-orientdb-neo4j-arangodb/ as to why ArangoDB would be a better choice for you.
Edit
Due to my limited experience in C++, I can only provide some references which I saved earlier, but I'm sure they'll be of use to you.
For C++ the driver you should be using is:
https://www.arangodb.com/2017/11/introduction-fuerte-arangodb-c-plus-plus-driver/
An example of the usage of the driver:
https://www.arangodb.com/wp-content/uploads/2017/10/C-Example-Source-Code-File.cc
A simple example / tutorial on how to use graphing in ArangoDB:
https://docs.arangodb.com/3.2/Manual/Graphs/
A free course by ArangoDB on graphs:
https://www.arangodb.com/arangodb-graph-course/
Hope they help!
A natural thing about software is that you enhance it, and thus create successive versions of it. How do you handle that with Spotfire?
There are at least two ways I can think of.
First, in 7.5 and above you can spin up a test node and copy down any dxp you want from live to develop on in test. Once the "upgrade" or changes are complete, you back up the live version to disk somewhere... anywhere you do other backups... and deploy the new version to live.
For pre-7.5 the idea is the same but you would have to create a test folder in live with restricted access to test your upgrade on a web player.
Strictly speaking, asking "what version are you on" of an analytic, the way you would of software, isn't really the same thing in my opinion. There should only be one version of the truth. If you run multiple versions, you have to manage their cache updates separately, which is cumbersome in my opinion. Also, since an analytic has a GUID which relates it to its information sources, running copies in parallel in the same environment will cause duplication.
If this isn't what you were shooting for I'd love for you to elaborate on the original post and clarify anything I assumed. Cheers mate.
EDIT
Regarding the changes in 7.5, see this article from TIBCO, starting on p. 42, which explains that Spotfire has a new topology with a service-oriented architecture. From 7.5 onward, IIS is no longer used, and to access the web player you don't even go to the "web server" anymore. The application server handles all access and is the central point for authentication and management.
I recently landed myself on a project that, like most projects today, relies on multiple relational databases and also, like most projects today, relies on the flexibility and security of cloud computing.
I got into cloud services a little over a month ago and since then I've tackled the basics of most the services that Amazon Web Services offers but have only tested and deployed personal projects.
Now I will be working on a client's server, and it's a hefty instance, so I need to research the best method for developing against a pre-existing application on a cloud server. Also bear in mind that the data stored in the databases is being updated 'live'/dynamically.
I assume it's still good practice to take a local copy to work on? In that case, is the best method to download the whole server over SSH? If so, are there any alternatives? I feel that downloading the whole server and setting it all back up bit by bit (including the data 'stream') will be very time-consuming for such a big application connected to such big databases.
Is there something a little more elegant?
If you are working in a cloud environment, you may already have a development environment there.
You may keep a local repository, but you need nothing more than your IDE and your version-control tool.
All your services can be provided by the cloud, and the most efficient way to test your code is to do it on a target image.
Personally, I use Cloud Foundry or IBM Bluemix with my Git repository, and push my modifications directly when I want to test.
You can use cloud services, so you don't have to think about server setup/services; you will just need a repository like Git.
You can follow the sample links below for getting started with configuration on Bluemix:
https://hub.jazz.net/tutorials
https://www.facebook.com/ibmswg
We're developing a WPF business application for internal users, but this problem could apply to WinForms easily as well. We want to leverage a business rules engine to make modifying the rules in the future easier as well as to possibly let the business folks to do it themselves at some point.
BizTalk (we're using 2010) exposes its Business Rules Engine and, while complex, this looks to be a potentially worthwhile solution especially if we look to using it for future applications as well. We've loaded up a virtual server with the developer edition to try it out, as well as its own SQL Server instance to run off of.
Everything I've read (example and example) seems to show adding the BRE assemblies to the application project as references and then using the provided classes to call and execute policies. But they also suggest that these assemblies require a license, and we can't exactly license BizTalk for each of the dozens of possible end users that will use this WPF app.
Am I wrong about the licensing issue? Is it okay (and normal) to deploy the BRE assemblies with your app to all client machines in order for them to communicate with the BizTalk server where the policies exist? Should I look into exposing the BRE API via a Web Service or something? Are there any implementations out there already for doing that? Exposing the API like that seems like no small undertaking... or is it?
Microsoft says that the BRE is only available for server-side usage, e.g., in BizTalk orchestrations, ASP.NET apps, and Windows Services running on a server. The engine cannot be embedded in client applications.
From their FAQ on licensing:
All technical support and licensing for the BRE is only for server-side solutions. Note that you need to acquire a BizTalk Server 2010 license to utilize the Rules Engine, as the Rules Engine is considered server software requiring a valid processor license. The Rules Engine is not licensed separately from BizTalk Server.
Because of that, it may be worthwhile to look at using the BRE from an ASP.NET service that can be called from your WPF clients. If you want the clients to be able to update the rules, that is within the scope of the licensing agreement:
the Rules Composer is considered a client tool and may be installed on a separate internal client device to support development and testing of your BRE server solution
Be sure to check out Tellago's BRE Data Services API (available on CodePlex). They've done a lot of the work for you if you want to query the rules engine via your own service.
At first I found a P2P CRM at http://www.ajatus.info/, but it has been discontinued for years. It is also not natural to have a local web server. And the worst thing is that it is hard to integrate its data with other data sources, because it uses CouchDB.
So I have drafted a P2P CRM proposal and am thinking of implementing it.
Features:
Decentralization
Free (free software, no additional cost for related software)
Runs immediately (no installation or configuration needed)
Social networking support.
Email and Contacts friendly
Basic architecture: 4 independent software.
1. Personal CRM
A Silverlight CRM application with a built-in SQL CE database. This is a complete package, ready to run; no further installation is needed.
2. Central CRM
A central server for performance and to simplify support, which could be based on a typical SQL Server database from Splendid/Tiger CRM. This is a complete package as well.
3. CRM Bridge
A bridge to synchronize the Personal CRM and the Central CRM. This will be an open-source project for synchronizing ANY CRM with the client, built on the MS Sync Framework. (MS Live Sync could be a better solution once it is ready and available on the XP platform.)
4. Social Collector
A social data collector to gather all data from social networks and other data sources. There is a good project on CodePlex (http://semsync.codeplex.com/) that collects and synchronizes all contact information together.
Scenarios:
Personal only.
Client to Central CRM directly (in the DB layer).
Personal, with synchronization to the central server.
Any suggestions?
Ying
If Java is an option for you, the JXTA framework will help you with the P2P features of your application.
Sorry, but I feel your base analysis is somewhat flawed.
1. "Not natural to have a local web server"
By whose rules?
If you are an internet application vendor (the cloud computing hawkers), then they will tell you it is not natural.
The whole point of P2P is to rethink those values.
If it makes sense to put a web server on the local machine, put it on the local machine.
Remember the original vision: "The network is the computer";
not "Large data centers are the computer".
2. "CouchDB is hard to integrate"!
I think that is misinformed.
CouchDB has a RESTful JSON API that makes it about as integratable as you can get.
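To make that concrete, here is a minimal Python sketch of talking to CouchDB's document API with nothing but the standard library. The server address and the "contacts" database name are assumptions; the commented-out calls at the bottom need a running CouchDB instance:

```python
import json
import urllib.request

# Assumptions: CouchDB listening on localhost:5984, a database named "contacts".
COUCH = "http://localhost:5984"
DB = "contacts"

def doc_url(doc_id):
    """CouchDB addresses every document as /{db}/{doc_id}."""
    return f"{COUCH}/{DB}/{doc_id}"

def put_doc(doc_id, doc):
    """Store a document: an HTTP PUT with a JSON body."""
    req = urllib.request.Request(
        doc_url(doc_id),
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_doc(doc_id):
    """Fetch a document: an HTTP GET; the reply is plain JSON."""
    with urllib.request.urlopen(doc_url(doc_id)) as resp:
        return json.load(resp)

# With a server running, you could do:
# put_doc("alice", {"name": "Alice", "email": "alice@example.com"})
# get_doc("alice")
```

Since every operation is just HTTP plus JSON, any language with an HTTP client can integrate Couch's data with other sources.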
What you really mean is that CouchDB doesn't fit into the Visual Studio development system the way SQL Server does. Which is true, but that doesn't make its data hard to integrate with other data.
There are some replication options you might want to look at.
To be honest, what you are offering isn't much different from MS CRM with a social plugins module.
I think it would be difficult to get traction in the OSS space, and you're going to need help for a project of that size.