Can you export a Queue from Dynamics CRM?
I have a number of workflows that assign certain tasks to Queues for users to pick up later. There doesn't seem to be a way to export a Queue from CRM with the other customisations, which means all the queues have to be set up again after a deployment. Am I just missing where to export Queues? I don't really want to write database scripts.
Kind regards.
There is no supported way, that I'm aware of, to migrate Queues. Unfortunately you'll need to set them up in your new environment. Also, all workflows that make reference to these queues will break when you migrate customizations.
If you import a team into Dynamics CRM, it will automatically create a default queue for that team.
So if you are also planning to migrate queues, which usually goes hand in hand with working with teams, just import the teams and you will see that one default queue is automatically created for each team.
In your teams.csv import file, leave the column for default queues blank.
As part of migrating my Google App Engine Standard project from python2 to python3, it looks like I also need to switch from using the Taskqueue API & Library to google-cloud-tasks.
In the taskqueue library I could enqueue up to 100 tasks at a time, like this:
taskqueue.Queue('default').add([...task objects...])
as well as enqueue tasks asynchronously.
In the new library, as well as in the new API, it looks like you can only enqueue tasks one at a time:
https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/create
https://googleapis.dev/python/cloudtasks/latest/gapic/v2/api.html#google.cloud.tasks_v2.CloudTasksClient.create_task
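For reference, a single enqueue with the new client appears to look roughly like this (the project, location, queue name and handler route below are placeholders, not my real values):

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "default")

task = {
    "app_engine_http_request": {
        "relative_uri": "/process_element",   # placeholder handler route
        "body": b"one element's payload",
    }
}
client.create_task(parent=parent, task=task)  # one round trip per task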
I have an endpoint that receives a batch with thousands of elements, each of which needs to be processed in an individual task. How should I go about this?
According to the official documentation (reference 1, reference 2), adding tasks to queues asynchronously (as this post suggests for adding a bulk number of tasks to a queue) is NOT available via the Cloud Tasks API. It is available to users of the App Engine SDK, though.
However, there is a reference in the documentation to adding a large number of Cloud Tasks to a queue via a double-injection pattern workaround (this post might be useful too).
To implement this scenario, you'll need to create a new injector queue, whose single task would contain the information needed to add multiple (e.g. 100) tasks to the original queue that you're using. On the receiving end of this injector queue would be a service that does the actual addition of the intended tasks to your original queue. Although the addition of tasks in this service will be synchronous and one-by-one, it provides an asynchronous interface to your main application for bulk-adding tasks. In this way you can overcome the limits of synchronous, one-by-one task addition in your main application.
Note that the 500/50/5 pattern of task addition (start at a maximum of 500 operations per second, then increase the rate by no more than 50% every 5 minutes) is the suggested method, in order to avoid overloading the queue or its target.
As I did not find any examples of this implementation, I will edit the answer as soon as I find one.
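In the meantime, here is a rough sketch of what the two sides might look like (the queue names, handler routes and payload format are my own assumptions, not taken from the documentation):

import json
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
PROJECT, LOCATION = "my-project", "us-central1"            # placeholders
INJECTOR_QUEUE = client.queue_path(PROJECT, LOCATION, "injector")
WORK_QUEUE = client.queue_path(PROJECT, LOCATION, "default")

def enqueue_batch(elements):
    # Main application: one injector task per chunk of ~100 elements.
    for i in range(0, len(elements), 100):
        chunk = elements[i:i + 100]
        client.create_task(parent=INJECTOR_QUEUE, task={
            "app_engine_http_request": {
                "relative_uri": "/fan_out",                # injector service route
                "body": json.dumps(chunk).encode(),
            }
        })

def fan_out(request_body):
    # Injector service: add the real tasks synchronously, one by one.
    for element in json.loads(request_body):
        client.create_task(parent=WORK_QUEUE, task={
            "app_engine_http_request": {
                "relative_uri": "/process",                # worker route
                "body": json.dumps(element).encode(),
            }
        })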
Since you are in the middle of a migration, I figured this link would be useful, as it concerns migrating from Task Queue to Cloud Tasks (as you stated you are planning to do).
Additional information on migrating your code, with all the available details, can be found here and here, covering pull queues to Cloud Pub/Sub migration and push queues to Cloud Tasks migration respectively.
In order to recreate a batch pull mechanism, you would have to switch to Pub/Sub. Cloud Tasks does not have pull queues. With Pub/Sub you can batch push and batch pull messages.
If you are using a push queue architecture, I would recommend passing those elements as the task payload; however, the maximum task size is limited to 100 KB.
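For the Pub/Sub route, a minimal sketch of batch publishing (project and topic names are placeholders) could look like this; the publisher client batches messages for you:

from google.cloud import pubsub_v1

batch_settings = pubsub_v1.types.BatchSettings(max_messages=100, max_latency=0.05)
publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic = publisher.topic_path("my-project", "work-items")

elements = [b"element-1", b"element-2", b"element-3"]      # your batch
futures = [publisher.publish(topic, data=e) for e in elements]
for f in futures:
    f.result()   # wait for the batched publishes to complete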
We need to keep our Firebase data in sync with other databases for full-text search (in ElasticSearch) and other kinds of queries that Firebase doesn't easily support.
This needs to be as close to real-time as possible; we can't just export a nightly dump of the Firebase JSON or anything like that, aside from the fact that it would get rather large.
My initial thought was to run a Node.js client which listens to child_changed, child_added, child_removed etc. events of all the main lists, but this could get a bit unwieldy, and would it be a reliable way of syncing if the client re-connects after a period of time?
My next thought was to maintain a list of "items changed" events and write to that every time an item is created/updated, similar to the Firebase work queue example. The queue could contain the full path to the data which has changed and the worker just consumes that and updates the local database accordingly.
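To illustrate the worker side of that approach, the consumer could be something like this (sketched here with the firebase-admin Python SDK purely as an illustration; the credentials, database URL, queue path and the sync_to_local_db helper are all placeholders):

import firebase_admin
from firebase_admin import credentials, db

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"),       # placeholder credentials
    {"databaseURL": "https://my-app.firebaseio.com"},       # placeholder URL
)

def sync_to_local_db(changed_path):
    # Placeholder: re-read the data at changed_path and upsert/delete it in
    # ElasticSearch or whichever secondary store is in use.
    snapshot = db.reference(changed_path).get()
    print(changed_path, snapshot)

def on_queue_event(event):
    # event.data is the new value at event.path: the whole queue on the first
    # event, a single entry afterwards. Entries are assumed to be path strings.
    if not event.data:
        return
    entries = event.data.values() if isinstance(event.data, dict) else [event.data]
    for changed_path in entries:
        sync_to_local_db(changed_path)

# Streams queue entries to the callback on a background listener thread.
db.reference("/items_changed").listen(on_queue_event)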
The problem here is that every bit of code which makes updates has to remember to write to this queue, otherwise the two systems will get out of sync. Some proxy code shouldn't be too hard to write, though.
Has anyone else done anything similar with any success?
For search queries, you can integrate directly with ElasticSearch; there is no need to sync with a secondary database. Firebase has a blog post about integrating and a lib, Flashlight, to make this quick and painless.
Another option is to use the logstash-input-firebase Logstash plugin in order to listen to changes in your Firebase real-time database(s) and forward the data in real-time to Elasticsearch using an elasticsearch output.
I am new to the community and looking forward to being a contributing member. I wanted to throw this out there and see if anyone had any advice:
I am currently in the middle of developing an MVC 3 app that controls various SQL jobs. It basically allows users to schedule jobs to be completed in the future, but also allows them to run jobs on demand.
I was thinking of having a thread run in the web app that pulls entity information into an XML file, and writing a Windows service to monitor this file and perform the requested jobs. Does this sound like a good method? Has anyone done something like this before, or used a different approach? Any advice would be great. I will keep the forum posted on progress and practices.
Thanks
I can see you running into some issues using a file for complex communication between processes - files can generally only be written by one process at a time, so what happens if the worker process tries to remove a task at the same time as the web process tries to add a task?
A better approach would be to store the tasks in a database that is accessible to both processes - a database can be written to by multiple processes, and it is easy to select all tasks that have a scheduled date in the past.
Using a database you don't get to use FileSystemWatcher, which I suspect is one of the main reasons you want to use a file. If you really need the job to run instantly there are various sorts of messaging you could use, but for most purposes you can just check the queue table on a timer.
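As a minimal sketch of that shared-table idea (in Python/SQLite for brevity; the table and column names are made up, and a real C# implementation would follow the same shape):

import sqlite3

conn = sqlite3.connect("jobs.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS job_queue (
        id INTEGER PRIMARY KEY,
        job_name TEXT NOT NULL,
        scheduled_at TIMESTAMP NOT NULL,
        completed INTEGER DEFAULT 0
    )
""")

# Web process: add a job.
conn.execute(
    "INSERT INTO job_queue (job_name, scheduled_at) VALUES (?, CURRENT_TIMESTAMP)",
    ("rebuild index",),
)
conn.commit()

# Worker process, on a timer: select all jobs whose scheduled date is in the past.
due = conn.execute(
    "SELECT id, job_name FROM job_queue "
    "WHERE scheduled_at <= CURRENT_TIMESTAMP AND completed = 0"
).fetchall()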
After a user submits data to my app, I'd like to write to the database asynchronously, possibly through a message queue.
How do I set up such a system? Are there any pluggable Django apps that do such message queue-based database writes?
Also, how do I handle errors that happen during the async processing?
Would really appreciate any pointers you can give me. Thank you.
Use Celery as a queue mechanism with a processor on the back end. It's one of the simpler setups, and very effective. You can back it with persistence, or not, as you need. There's a good walkthrough on setting it up with Django on the website as well. Typically you'll run a queue processor as a daemon, import the model bits from Django if you're using those, and do the updates/inserts/etc. as you need.
The documentation includes an example of processing a serial task that you can use as a template.
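For instance, a minimal sketch of such a task (the Submission model and field names are placeholders), with a retry to cover errors during the async processing:

# tasks.py
from celery import shared_task
from myapp.models import Submission   # placeholder Django model

@shared_task(bind=True, max_retries=3)
def save_submission(self, payload):
    try:
        Submission.objects.create(**payload)
    except Exception as exc:
        # Retry later instead of silently losing the write.
        raise self.retry(exc=exc, countdown=30)

From the view you would then call save_submission.delay(form.cleaned_data) and return to the user immediately.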
You could take a look at Celery with RabbitMQ or another ghetto queue.
I have been building a multi-user database application (in C#/WPF 4.0) that manages tasks for all employees of a company. I now need to add some functionality such as sending an email reminder to someone when a critical task is due. How should this be done? Obviously I don’t want every instance of the program to be performing this function (Heh each user would get 10+ emails).
Should I add the capability to the application as a "Mode" and then run a copy on the database server in this mode or would it be better to create a new app altogether to perform "Global" type tasks? Is there a better way?
You could create a Windows service or WCF service that would poll the database at regular intervals for any pending tasks and send mails accordingly.
Some flag would be needed to indicate whether the email has been sent for a particular task.
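A rough sketch of that loop (in Python for brevity; a C# Windows service would follow the same shape, and the table, column and SMTP details are made up):

import sqlite3, smtplib, time
from email.message import EmailMessage

conn = sqlite3.connect("tasks.db")

def send_reminder(to_addr, task_name):
    msg = EmailMessage()
    msg["Subject"] = "Reminder: %s is due" % task_name
    msg["From"] = "noreply@example.com"
    msg["To"] = to_addr
    msg.set_content("The critical task '%s' is due." % task_name)
    with smtplib.SMTP("localhost") as smtp:               # placeholder SMTP host
        smtp.send_message(msg)

while True:
    due = conn.execute(
        "SELECT id, owner_email, name FROM tasks "
        "WHERE due_at <= CURRENT_TIMESTAMP AND reminder_sent = 0"
    ).fetchall()
    for task_id, owner_email, name in due:
        send_reminder(owner_email, name)
        conn.execute("UPDATE tasks SET reminder_sent = 1 WHERE id = ?", (task_id,))
        conn.commit()
    time.sleep(60)   # polling interval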