QuickBooks allows users to change posted periods. How can I tell if a user does this?
I actually don't need an audit log, but just the ability to see recently added/edited data that has a transaction date that's over a month in the past.
In a meeting today it was suggested that we may need to refresh data for all our users going back as far as a year on a regular basis. This would be pretty time-consuming, and I think unnecessary when the majority of the data isn't changing. But I need to find out how I can see whether data (such as an expense) has been added to a prior period, so I know when to pull it again.
Is there a way to query for data (in any object or report) based not on the date of the transaction, but based on the date it was entered/edited?
I'm asking this in regard to the QBO API; however, if you know how to find this information from the web portal, that may also be helpful.
QuickBooks has a ChangeDataCapture endpoint built for exactly the purpose you describe. It's documented here:
https://developer.intuit.com/app/developer/qbo/docs/api/accounting/all-entities/changedatacapture
The TL;DR summary is this:
The change data capture (cdc) operation returns a list of objects that have changed since a specified time.
In other words, you can poll this endpoint on a schedule, and you'll only get back the data that has actually changed since the last time you hit it.
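Here's a minimal sketch of polling it with Java's built-in HTTP client. The realm ID, entity list, lookback window, and OAuth token are all placeholders to adapt to your own setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Fetch everything created/edited in the last 24 hours, regardless of the
// transaction date. realmId and accessToken are placeholder credentials,
// and the entity list is just an example; adjust both to your app.
static String fetchRecentChanges(String realmId, String accessToken) throws Exception {
    String changedSince = Instant.now().minus(24, ChronoUnit.HOURS).toString();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://quickbooks.api.intuit.com/v3/company/" + realmId
            + "/cdc?entities=Purchase,Invoice,Bill&changedSince=" + changedSince))
        .header("Authorization", "Bearer " + accessToken)
        .header("Accept", "application/json")
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    return response.body();  // JSON containing one change list per requested entity
}
```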
While designing an event delivery system with 10 different event types, I see two options for modeling topics on Google Pub/Sub: either each event type gets its own topic, or all 10 event types are published to the same topic. Each event type has different subscribers, and it's possible to queue all the events on one topic and filter messages on the subscriber side. I'm looking for very high throughput with minimal latency; should I go for a single topic or multiple topics?
Filtering on the subscriber side isn't a good idea: you will use more resources (and spend more money), and you will add latency, because every subscriber receives every message only to discard the ones it doesn't care about.
So, there are two solutions:
Create as many topics as you have event types. But then the publisher has to do the filtering and publish each event to the correct topic. In addition, if you add a new event type, you need to create a new topic and the routing logic to deliver to it. Not ideal.
Create only one topic and filter at the subscription level (not the subscriber level, as proposed in your question). For this, add the event type as a Pub/Sub message attribute; you can then create whatever subscriptions you want, each with a filter on that attribute. The publishers don't need to know how the messages are consumed, and for a new event type you simply create a new subscription with the correct filter.
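For example, a minimal sketch with the Java client library. The project, topic, subscription, and attribute names are all illustrative, and note that a subscription's filter can only be set when the subscription is created:

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.Subscription;
import com.google.pubsub.v1.SubscriptionName;
import com.google.pubsub.v1.TopicName;

// Create one filtered subscription per event type on a single shared topic.
// All names ("events", "order-created-sub", "eventType") are illustrative.
static void createFilteredSubscription(String projectId) throws Exception {
    TopicName topic = TopicName.of(projectId, "events");
    try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
        admin.createSubscription(
            Subscription.newBuilder()
                .setName(SubscriptionName.of(projectId, "order-created-sub").toString())
                .setTopic(topic.toString())
                .setFilter("attributes.eventType = \"order_created\"")  // set at creation only
                .setAckDeadlineSeconds(30)
                .build());
    }
}

// Publishers just tag each message with its event type attribute.
static void publish(Publisher publisher) throws Exception {
    publisher.publish(PubsubMessage.newBuilder()
        .setData(ByteString.copyFromUtf8("{\"orderId\":42}"))
        .putAttributes("eventType", "order_created")
        .build());
}
```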
I'm writing an application that syncs users and groups from Active Directory.
Specifically, I need to track their IDs, DNs, and group membership, and save them to a local database.
I'm worried about the member attribute, as it can have millions of values.
Production environments have been reported to exceed 4 million members, and Microsoft scalability testing reached 500 million members.
How do I track changes to such gigantic multi-valued attributes?
I'm using LDAP with the UnboundID SDK.
Is it possible to query an attribute's value count?
Is it possible to know whether a multi-valued attribute has been updated, without reading it?
How do I get iterative updates, similar to DirSync, but with the USNChanged approach?
Here is what I know:
As mentioned in the Microsoft docs, there are three ways to do synchronization:
USNChanged -- the most compatible way.
DirSync -- requires near-admin privileges and can only sync a whole domain (partition); syncing an arbitrary subtree is not possible. Returns only updated attributes; iterative updates for multi-valued attributes are possible.
Change Notifications -- an async search request; the scope can be BASE or ONE_LEVEL, and there can be up to 5 searches per connection. Each change sends the whole object.
I'm implementing USNChanged, since it's the recommended approach.
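Here's my rough sketch of one USNChanged polling cycle with the UnboundID SDK, as I understand the approach. The base DN, attribute list, and database upsert are placeholders, and a real implementation would also page large result sets with SimplePagedResultsControl:

```java
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;

// One polling cycle: returns the new USN cursor to persist for the next run.
static long pollChanges(LDAPConnection conn, String baseDN, long lastUsn) throws Exception {
    // Read the high-water mark from the rootDSE *before* searching, so changes
    // committed while we search are picked up again on the next cycle.
    long highestUsn = Long.parseLong(
        conn.getRootDSE().getAttributeValue("highestCommittedUSN"));

    SearchResult result = conn.search(baseDN, SearchScope.SUB,
        "(&(|(objectClass=user)(objectClass=group))(uSNChanged>=" + (lastUsn + 1) + "))",
        "objectGUID", "distinguishedName", "uSNChanged");
    for (SearchResultEntry entry : result.getSearchEntries()) {
        // Upsert into the local database, keyed on objectGUID (placeholder).
    }
    return highestUsn;
}
```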
This is how to read an attribute with a lot of values.
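Concretely, that means Active Directory's ranged retrieval. A sketch with the UnboundID SDK, assuming AD's default range size of 1500 (the group DN is a placeholder):

```java
import com.unboundid.ldap.sdk.Attribute;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;
import java.util.ArrayList;
import java.util.List;

// Reads every value of "member" in chunks of 1500 (AD's default MaxValRange).
static List<String> readAllMembers(LDAPConnection conn, String groupDN) throws Exception {
    List<String> members = new ArrayList<>();
    int start = 0;
    final int pageSize = 1500;
    while (true) {
        String rangeAttr = "member;range=" + start + "-" + (start + pageSize - 1);
        SearchResultEntry entry = conn.searchForEntry(
            groupDN, SearchScope.BASE, "(objectClass=*)", rangeAttr);
        if (entry == null) break;  // group not found
        boolean lastChunk = true;  // also covers an empty group (no member attr)
        for (Attribute attr : entry.getAttributes()) {
            if (attr.getBaseName().equalsIgnoreCase("member")) {
                members.addAll(List.of(attr.getValues()));
                // AD names the final chunk "member;range=N-*".
                lastChunk = attr.getName().endsWith("-*");
            }
        }
        if (lastChunk) break;
        start += pageSize;
    }
    return members;
}
```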
What is the best way to achieve DB consistency in microservice-based systems?
At the GOTO in Berlin, Martin Fowler was talking about microservices and one "rule" he mentioned was to keep "per-service" databases, which means that services cannot directly connect to a DB "owned" by another service.
This is super-nice and elegant but in practice it becomes a bit tricky. Suppose that you have a few services:
a frontend
an order-management service
a loyalty-program service
Now, a customer makes a purchase on your frontend, which will call the order-management service, which will save everything in the DB -- no problem. At this point, there will also be a call to the loyalty-program service so that it credits/debits points from your account.
Now, when everything is on the same DB / DB server, it all becomes easy, since you can run everything in one transaction: if the loyalty-program service fails to write to the DB, we can roll the whole thing back.
When DB operations are spread across multiple services, this isn't possible, as we no longer have one connection and can't take advantage of running a single transaction.
What are the best patterns to keep things consistent and live a happy life?
I'm quite eager to hear your suggestions! ...and thanks in advance!
This is super-nice and elegant but in practice it becomes a bit tricky
What "in practice" means is that you need to design your microservices in such a way that the necessary business consistency is fulfilled while following the rule:
that services cannot directly connect to a DB "owned" by another service.
In other words, don't make any assumptions about their responsibilities; change the boundaries as needed until you can find a way to make that work.
Now, to your question:
What are the best patterns to keep things consistent and live a happy life?
For things that don't require immediate consistency, and updating loyalty points seems to fall in that category, you could use a reliable pub/sub pattern to dispatch events from one microservice to be processed by others. The reliable bit is that you'd want good retries, rollback, and idempotence (or transactionality) for the event processing stuff.
If you're running on .NET some examples of infrastructure that support this kind of reliability include NServiceBus and MassTransit. Full disclosure - I'm the founder of NServiceBus.
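Regardless of stack, the idempotence half of that reliability can be as simple as this sketch: record the event ID and apply the change in one local database transaction, so redeliveries become no-ops. Plain JDBC here; the table names are made up, and the ON CONFLICT clause is PostgreSQL syntax, so swap in your database's equivalent:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

// Deduplicate on the event ID and apply the change atomically, so that a
// retried/redelivered event is effectively processed exactly once.
// Table and column names are illustrative; ON CONFLICT is PostgreSQL-specific.
static void creditPoints(Connection db, String eventId, long customerId, int points)
        throws Exception {
    db.setAutoCommit(false);
    try {
        try (PreparedStatement dedupe = db.prepareStatement(
                "INSERT INTO processed_events (event_id) VALUES (?) ON CONFLICT DO NOTHING")) {
            dedupe.setString(1, eventId);
            if (dedupe.executeUpdate() == 0) {  // already processed: nothing to do
                db.rollback();
                return;
            }
        }
        try (PreparedStatement credit = db.prepareStatement(
                "UPDATE loyalty_accounts SET points = points + ? WHERE customer_id = ?")) {
            credit.setInt(1, points);
            credit.setLong(2, customerId);
            credit.executeUpdate();
        }
        db.commit();
    } catch (Exception e) {
        db.rollback();
        throw e;  // let the messaging infrastructure retry the event
    }
}
```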
Update: Following comments regarding concerns about the loyalty points: "if balance updates are processed with delay, a customer may actually be able to order more items than they have points for".
Many people struggle with these kinds of requirements for strong consistency. The thing is that these kinds of scenarios can usually be dealt with by introducing additional rules, like: if a user ends up with negative loyalty points, notify them; if time T goes by without the loyalty points being sorted out, notify the user that they will be charged amount M based on some conversion rate. This policy should be visible to customers when they use points to purchase stuff.
I don’t usually deal with microservices, and this might not be a good way of doing things, but here’s an idea:
To restate the problem, the system consists of three independent-but-communicating parts: the frontend, the order-management backend, and the loyalty-program backend. The frontend wants to make sure some state is saved in both the order-management backend and the loyalty-program backend.
One possible solution would be to implement some type of two-phase commit:
First, the frontend places a record in its own database with all the data. Call this the frontend record.
The frontend asks the order-management backend for a transaction ID, and passes it whatever data it would need to complete the action. The order-management backend stores this data in a staging area, associating with it a fresh transaction ID and returning that to the frontend.
The order-management transaction ID is stored as part of the frontend record.
The frontend asks the loyalty-program backend for a transaction ID, and passes it whatever data it would need to complete the action. The loyalty-program backend stores this data in a staging area, associating with it a fresh transaction ID and returning that to the frontend.
The loyalty-program transaction ID is stored as part of the frontend record.
The frontend tells the order-management backend to finalize the transaction associated with the transaction ID the frontend stored.
The frontend tells the loyalty-program backend to finalize the transaction associated with the transaction ID the frontend stored.
The frontend deletes its frontend record.
If this is implemented, the changes will not necessarily be atomic, but the system will be eventually consistent. Let's think of the places it could fail:
If it fails in the first step, no data will change.
If it fails in the second, third, fourth, or fifth step, when the system comes back online it can scan through all frontend records, looking for records without an associated transaction ID (of either type). If it comes across any such record, it can replay beginning at step 2. (If there is a failure in step 3 or 5, there will be some abandoned records left in the backends, but they are never moved out of the staging area, so that's OK.)
If it fails in the sixth, seventh, or eighth step, when the system comes back online it can look for all frontend records with both transaction IDs filled in. It can then query the backends to see the state of these transactions -- committed or uncommitted. Depending on which have been committed, it can resume from the appropriate step.
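As a rough illustration of the happy path, here's a compilable sketch of that coordinator. All the types (Participant, RecordStore, Coordinator) are hypothetical stand-ins for your own service clients, not any real library:

```java
// Hypothetical interfaces matching the steps above; not a real framework.
interface Participant {
    String prepare(String payload) throws Exception;  // stage data, return a tx ID
    void commit(String txId) throws Exception;        // finalize the staged data
}

interface RecordStore {
    long create(String payload);                            // step 1: frontend record
    void setTxId(long recordId, String service, String tx); // steps 3 and 5
    void delete(long recordId);                             // step 8
}

final class Coordinator {
    private final RecordStore store;
    private final Participant orders;
    private final Participant loyalty;

    Coordinator(RecordStore store, Participant orders, Participant loyalty) {
        this.store = store;
        this.orders = orders;
        this.loyalty = loyalty;
    }

    void placeOrder(String payload) throws Exception {
        long rec = store.create(payload);          // step 1
        String orderTx = orders.prepare(payload);  // step 2
        store.setTxId(rec, "orders", orderTx);     // step 3
        String loyaltyTx = loyalty.prepare(payload); // step 4
        store.setTxId(rec, "loyalty", loyaltyTx);  // step 5
        orders.commit(orderTx);                    // step 6
        loyalty.commit(loyaltyTx);                 // step 7
        store.delete(rec);                         // step 8
        // A recovery job scans leftover frontend records on startup and
        // resumes from the appropriate step, as described above.
    }
}
```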
I agree with what @Udi Dahan said. Just want to add to his answer.
I think you need to persist the request to the loyalty program so that if it fails it can be done at some other point. There are various ways to word/do this.
1) Make the loyalty program API failure-recoverable. That is to say, it can persist requests so that they do not get lost and can be recovered (re-executed) at some later point.
2) Execute the loyalty program requests asynchronously. That is to say, persist the request somewhere first, then allow the service to read it from this persisted store. Only remove it from the persisted store when it has been successfully executed.
3) Do what Udi said, and place it on a good queue (pub/sub pattern, to be exact). This usually requires that the subscriber do one of two things: either persist the request before removing it from the queue (go to 1), or first borrow the request from the queue, then have the request removed from the queue only after successfully processing it (this is my preference; see the sketch below).
All three accomplish the same thing: they move the request to a persistent place where it can be worked on until successful completion. The request is never lost, and is retried if necessary until a satisfactory state is reached.
I like to use the example of a relay race. Each service or piece of code must take ownership of the request before the previous piece of code is allowed to let go of it. Once it's handed off, the current owner must not lose the request until it gets processed or handed off to some other piece of code.
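A minimal sketch of option 3's borrow-then-remove semantics, using Google Cloud Pub/Sub's Java client as an example broker (the subscription name and the business handler are placeholders): the message is acknowledged, and therefore removed from the queue, only after processing succeeds.

```java
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

// Hypothetical business handler; throws on failure.
static void creditLoyaltyPoints(PubsubMessage msg) { /* business logic */ }

static void consume(String projectId) {
    MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
        try {
            creditLoyaltyPoints(message);
            consumer.ack();   // processing succeeded: now remove it from the queue
        } catch (Exception e) {
            consumer.nack();  // leave it on the queue for redelivery
        }
    };
    // "loyalty-sub" is a placeholder subscription name.
    Subscriber subscriber = Subscriber.newBuilder(
        ProjectSubscriptionName.of(projectId, "loyalty-sub"), receiver).build();
    subscriber.startAsync().awaitRunning();
}
```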
Even with distributed transactions you can get into an "in-doubt" status if one of the participants crashes in the middle of the transaction. If you design the services as idempotent operations, life becomes a bit easier: one can write programs to fulfill business conditions without XA. Pat Helland has written an excellent paper on this called "Life beyond Distributed Transactions". Basically, the approach is to make as few assumptions about remote entities as possible. He also illustrated an approach called Open Nested Transactions (http://www.cidrdb.org/cidr2013/Papers/CIDR13_Paper142.pdf) for modeling business processes. In this specific case, the purchase transaction would be the top-level flow, and loyalty and order management would be next-level flows. The trick is to create granular services as idempotent services with compensation logic. So if anything fails anywhere in the flow, individual services can compensate for it. For example, if the order fails for some reason, loyalty can deduct the points accrued for that purchase.
Another approach is to model for eventual consistency using CALM or CRDTs. I've written a blog post highlighting the use of CALM in real life -- http://shripad-agashe.github.io/2015/08/Art-Of-Disorderly-Programming -- maybe it will help you.
I'm creating a website which has a premium user feature. I'm thinking about how to design the database to store the premium user plan, and how to check it.
My main idea so far is:
Have 2 fields on the user table: premium (boolean) and expires (date)
When the user pays, calculate the plan duration, set premium to 1, and set the expiry date to the end of the duration
Every time I check user->isPremium(), it also checks whether the plan has expired; if so, set premium back to zero and offer a renewal
Aside from this, all payments/transactions will be recorded in a logs table for record keeping.
This is a simple design I came up with, but since this is a common feature on many websites, I thought I'd ask you guys: how do the pros do this?
This probably won't make much difference on the design, but I'll use Stripe for handling payments.
It looks good to me. It is simple and solves your problem.
Hint 1: Depending on the semantics of your premium and expires fields, you may not need both. You can just change user->isPremium() to check whether the expires date has passed. Make sure you also change how you handle offering a renewal.
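A minimal sketch of Hint 1 in Java (class, field, and method names are illustrative): a single nullable expiry timestamp replaces the boolean + date pair.

```java
import java.time.Duration;
import java.time.Instant;

final class User {
    private Instant premiumExpiresAt;  // null = user has never been premium

    boolean isPremium() {
        return premiumExpiresAt != null && Instant.now().isBefore(premiumExpiresAt);
    }

    // Called after a successful payment. Extends from the current expiry when
    // the user renews early; otherwise starts the clock from now.
    void extendPremium(Duration planLength) {
        Instant base = isPremium() ? premiumExpiresAt : Instant.now();
        premiumExpiresAt = base.plus(planLength);
    }
}
```

Extending from the current expiry rather than from "now" also covers the first case in Hint 2 below (renewing before the expiration date).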
Hint 2: I work on a system that handles plan subscriptions, and I had to deal with the following cases:
Permit users to renew/extend the subscription before the expiration date.
Different prices for different durations.
Discounts.
The delay between bill generation and payment confirmation.
Users with pending payments trying to buy again.
Users asking to cancel current subscriptions.
Hope it helps.