I've been evaluating Spring XD for a major project and I'm wondering how to update module code with zero downtime.
It seems that, to update a module, it first has to be deleted, and deleting a module implies destroying the streams that use it.
Several ideas come to mind for doing this with zero downtime (like rerouting a stream to a queue). Any other ideas, approaches, or solutions?
I had the same problem, and one way to approach it is to use versioning in your module name - in the same way you version your jar file name (e.g. myprocessor-1.0.0, myprocessor-1.1.0). That way you can try a newer version of your module without removing the old one, and remove the old one (if you want) once you've deployed and tested your new stream.
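For illustration, here is a rough sketch of that flow in the XD shell - the module, stream, and file names are made up, and this assumes an XD release that supports module upload:

    xd:> module upload --file /tmp/myprocessor-1.1.0.jar --name myprocessor-1.1.0 --type processor
    xd:> stream create --name mystream-v2 --definition "time | myprocessor-1.1.0 | log" --deploy
    xd:> stream destroy --name mystream-v1

Nothing forces you to destroy the old stream immediately after deploying the new one, and that overlap is what gives you the zero-downtime window.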
We're designing a system for a client that allows authenticated users to upload images. We've created an API to upload the files, but the client only wants to keep the latest file and delete all previous ones, so that there is only ever one.
We've looked through the docs and haven't come across a way for ADAM to handle this, in either 2SXC or DNN's file system.
Internally, when deleting images, we see API calls like the following to the internal 2SXC API, but we're wondering if this is exposed somewhere in the public API?
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint as above, but we'd likely run into permission issues, or future changes to this internal API could be problematic.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction: if you are on a later release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a Dnn Scheduled Task. This can be done with a relatively easy setup. We have a Gist that we use as a starter: you simply put the code in the /App_Code folder, then set up a normal Dnn Scheduled Task. NOTE that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install Dnn's CodeDom 3.6, because the example uses a later C# version's string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits and convert them to string.Format() or similar.
Since your task's code is NOT running inside a (2sxc) module, you'll need to reach the app's data from external code if required; see: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
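If it helps, here is a minimal sketch of the standard Dnn SchedulerClient pattern - this is not the Gist itself, and the class name, folder path, and 30-day threshold are made-up placeholders:

    using System;
    using System.IO;
    using DotNetNuke.Services.Scheduling;

    public class OldUploadCleanupTask : SchedulerClient
    {
        public OldUploadCleanupTask(ScheduleHistoryItem item) : base()
        {
            ScheduleHistoryItem = item;
        }

        public override void DoWork()
        {
            try
            {
                // Hypothetical ADAM folder - adjust to your portal/app path
                var folder = @"C:\inetpub\wwwroot\Portals\0\adam\SomeApp";
                var cutoff = DateTime.Now.AddDays(-30);

                // Find and delete files older than the cutoff
                foreach (var file in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
                {
                    if (File.GetLastWriteTime(file) < cutoff)
                        File.Delete(file);
                }

                ScheduleHistoryItem.Succeeded = true;
                ScheduleHistoryItem.AddLogNote("Old upload cleanup completed.");
            }
            catch (Exception ex)
            {
                ScheduleHistoryItem.Succeeded = false;
                ScheduleHistoryItem.AddLogNote("Cleanup failed: " + ex.Message);
                Errored(ref ex);
            }
        }
    }

Note the plain string concatenation in the log messages - that keeps the sketch compiling even without the CodeDom 3.6 upgrade mentioned above.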
file:/../..?noop=true&scheduler=quartz2&scheduler.cron=0+0/10+*+*+*+?
Using noop=true lets me keep the files in place after the route consumes them, but it also enables the idempotent repository, which I don't want. (There is a second route that deletes files based on other logic, so consuming non-idempotently in the first route shouldn't lead to an infinite loop, I believe.)
I think I could overwrite the file and set idempotentKey to ${file:name}-${file:modified} so the file gets picked up on the next poll, but that still means an extra write. Deleting and recreating the same file would also work, but again, that's not a clean approach.
Is there a better way to accomplish this? I could not find one in the Camel documentation.
Edit: To summarize, I want to read the same files over and over on a schedule (say, every 10 minutes) from the same directory. SOLVED! - answer below.
Camel Version: 2.14.1
Thanks!
SOLVED! Setting idempotent=false explicitly disables the idempotent repository that noop=true would otherwise switch on, so the same files are re-consumed on every scheduled poll while staying in place:
file:/../..?noop=true&idempotent=false&scheduler=quartz2&scheduler.cron=0+0/10+*+*+*+?
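For completeness, a minimal route sketch showing where that endpoint URI plugs in - the directory and the downstream direct:process endpoint are placeholders:

    import org.apache.camel.builder.RouteBuilder;

    public class RereadFilesRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Re-consume every file in the directory every 10 minutes,
            // leaving the files in place (noop) and with no idempotent filtering.
            from("file:/path/to/repo?noop=true&idempotent=false"
                    + "&scheduler=quartz2&scheduler.cron=0+0/10+*+*+*+?")
                .log("Re-reading ${file:name}")
                .to("direct:process");
        }
    }

(The quartz2 scheduler requires camel-quartz2 on the classpath.)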
Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've been looking into this problem a lot. I used both Nancy.Swagger and Nancy.Swagger.Attributes.
I quickly discarded Nancy.Swagger, because for me personally it doesn't feel right to have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least the codebase and the documentation were in one place. But it became unmaintainable very fast. Module code gets unreadable under the many attributes, and nothing is generated automatically: you have to specify the path, all parameters, and even the HTTP method as attributes. That's a huge duplication of effort. Problems appeared quickly; a few examples:
I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
I added a parameter but not the attribute for it.
I changed parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone the separate documentation-module approach), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country, and they had real trouble using the APIs because the docu just wasn't up to date.
My solution? Don't mix code and documentation. Generating docu from code (like Swashbuckle does) IS ok, but actually writing docu in code and trying to duplicate the code in the docu is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docu, just do it the Swagger way.
- Spend some time with Swagger.Editor and really author your API in YAML (see the sketch after this list). It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts that generate your modules and models from the YAML and copy them to your repository.
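To give a flavour of the authoring step, here is a minimal, hypothetical Swagger 2.0 definition - the endpoint and model are made up:

    swagger: "2.0"
    info:
      title: Example API
      version: "1.0.0"
    paths:
      /users/{id}:
        get:
          summary: Fetch a single user
          parameters:
            - name: id
              in: path
              required: true
              type: integer
          responses:
            "200":
              description: The requested user
              schema:
                $ref: "#/definitions/User"
    definitions:
      User:
        type: object
        properties:
          id:
            type: integer
          name:
            type: string

From a definition like this, Swagger.Codegen can emit both the Nancy modules and the client SDKs.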
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something somewhere is different, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case: Android, iOS, and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to their projects in one batch. I just have to tell the teams something was updated. They don't have to dig through documents looking for the change; their code is simply regenerated, and they'll probably see some compile errors in the case of breaking changes.
Do I still use nancy.swagger(.annotations)?
Yes, I still use it in another project, which has just one endpoint with a couple of methods that don't change often. There it's not worth the effort to set all of this up, and I have my Swagger docu up and running fast. But if your project is big, the API keeps changing, and you have multiple code bases depending on it, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.
Hi, I've recently started using TortoiseSVN; before that I was using CVS.
Suppose we have concurrent changes in the same project, and each change needs to be released at a different time.
In CVS, when we committed a file we would write a message in the message text box (say, CR00001), and when deploying the application we would just get all files whose message equals 'CR00001'. So we never worried about releasing the wrong version.
Is there a way for me to do this in TortoiseSVN?
Please help, Thanks.
You can always use TortoiseSVN to update to a specific revision.
For what you're trying to do, though, consider creating a RELEASE branch of the code, then use svnmerge to manage the promotion of specific changesets.
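For example - the repository URLs, branch name, and revision number here are placeholders - the branch-and-cherry-pick flow with plain SVN commands looks roughly like this (TortoiseSVN's Branch/Tag and Merge dialogs do the same thing):

    svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/RELEASE-CR00001 -m "Create release branch for CR00001"
    svn checkout http://svn.example.com/repo/branches/RELEASE-CR00001 release-wc
    cd release-wc
    svn merge -c 1234 ^/trunk .
    svn commit -m "CR00001: promote r1234 from trunk"

The -c flag merges just the named changeset, which is the SVN equivalent of picking files by commit message in your CVS workflow.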
We currently use an SVN repository to keep everyone's local environments up to date. However, Drupal website development is somewhat trickier, in that any custom code you write (for instance, PHP code entered in a node body) is stored in the DB, so those changes aren't recognized by the SVN working copy.
A couple of developers are presently working on the same area of a Drupal site, but we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/updates are one of Drupal's weak spots. See this question & its answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to, you could try to reduce the code to a simple function call. That way the function itself lives in a module (and is tracked via SVN). But then you're only a small step away from removing the need for the inline code anyway...
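To illustrate the "reduce it to a simple function call" idea - the module and function names here are made up:

    <?php
    // In the node body (with the PHP input format) - just one line:
    print mysite_render_promo_block();

    // In sites/all/modules/mysite/mysite.module - tracked by SVN:
    function mysite_render_promo_block() {
      // All the real logic lives here, in version control.
      return t('Hello from a versioned module!');
    }

Now the only thing stored in the DB is a one-line call, and every meaningful change happens in the module file.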
If you are putting PHP code into your database, you are doing it wrong. Some things do live in the database - like views, CCK fields, and some settings - but if you put PHP code inside a node body, you are creating a big code-maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question: there are a number of recent modules you could use to transport the changes made by the various developers.
The "Features" modules is a cure the the described issue of Drupal often providing nice features, albeit storing lots of configs and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (qualifies as a module with .info and code-files and all). Here is how it works:
1. Select the functionality/feature to export.
2. The module analyses the modules, files, and DB content required to rebuild that feature elsewhere.
3. The module creates a pseudo-module that contains the instructions from step 2 and outputs everything (even the SQL to rebuild the stuff in the DB) into a module package (it also sets dependencies on the other modules required).
4. Install the pseudo-module on your new site and enable it.
5. The pseudo-module replicates the feature you exported, rebuilding the DB data and all.
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the db and committing it after each change?
How about a master copy of the database? Extract all tables, views, SPs, etc. into individual files, put them into SVN, and do your merge edits on the individual objects.
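For instance - the database and table names are placeholders, and this assumes MySQL, as is typical for Drupal - per-table dumps diff far better in SVN than one monolithic dump:

    mysqldump --skip-comments --skip-extended-insert drupal_db users > db/users.sql
    mysqldump --skip-comments --skip-extended-insert drupal_db node > db/node.sql
    svn add db/*.sql
    svn commit -m "Track per-table dumps"

--skip-extended-insert writes one INSERT per row, so the diffs stay line-oriented and mergeable.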