MYOB's AccountRight documentation provides a sample for editing a Customer based on its UID and RowVersion. However, it does not cover how to update a specific address record associated with that customer.
Say, for example, I have a customer with the business name "My Business Customer" that has 5 addresses saved on it. How do I update Address #3 while keeping the original records for Addresses 1, 2, 4, and 5?
Adding only the updated address record to the customer's "Addresses" JSON property removes all the other addresses.
What you've described is the desire to use the PATCH HTTP verb. Unfortunately, last I checked (and as per the current docs), the MYOB API still only supports PUT, which means you have to provide the full, complete JSON object, as it essentially replaces what's in the customer's company file.
Your API calls and code would follow something similar to the following steps:
GET the /Contact/Customer/{guid}
Make modifications to the data (in your case, update Address 3)
PUT to the /Contact/Customer/{guid} URL with your updated object.
Naturally you might not want to do this every time, so you can GET and cache the result, and use the RowVersion to determine whether your cache is out of date. If it is, expect an HTTP 409 error, because the RowVersion you provide in your PUT won't match the latest RowVersion of the resource in the API - but the errors will help guide you there.
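A hedged Python sketch of those three steps (the endpoint path and the "Addresses" property name follow the question, not a verified schema; `session` is any requests-style client with auth already attached, injected so the flow can be exercised without a live API):

```python
import copy

def update_customer_address(session, base_url, customer_uid, address_index, new_fields):
    """GET the full customer, change one address, PUT the whole object back.

    `session` is any object with requests-style get/put methods (e.g. a
    requests.Session with the OAuth headers set up). The endpoint path is
    an assumption based on the question above.
    """
    url = f"{base_url}/Contact/Customer/{customer_uid}"
    resp = session.get(url)
    resp.raise_for_status()
    customer = copy.deepcopy(resp.json())

    # Update only the target address; everything else is sent back
    # unchanged, because PUT replaces the whole resource.
    customer["Addresses"][address_index].update(new_fields)

    put_resp = session.put(url, json=customer)
    if put_resp.status_code == 409:
        raise RuntimeError("RowVersion out of date: re-GET and retry")
    put_resp.raise_for_status()
    return put_resp
```

The RowVersion fetched in the GET travels back untouched inside the object, which is what lets the server detect the 409 case described above.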
Salesforce provides the CaseMilestone table. Each time I call the API to get the same object, I notice that the TimeRemainingInMins field has a different value. So I guessed this field is auto-calculated each time I call the API.
Is there a way to know which fields in a table are auto-calculated?
Note: I am using the Python simple-salesforce library.
CaseMilestone is special because it's used as a countdown to service-level agreement (SLA) violation and drives some escalation rules. Depending on how the admin configured the clock, you may notice it stops for weekends and bank holidays, or maybe counts only Mon-Fri 9-17...
Out of the box, another place that may have similar functionality is the OpportunityHistory table. I don't remember exactly, but it's used by SF for duration reporting - how long an opportunity spent in each stage.
That's standard. As for custom fields that change every time you read them even though nothing actually changed the record (LastModifiedDate staying the same) - your admin could have created formula fields based on NOW() or TODAY(), and these would also recalculate every time you read them. You'd need some "describe" calls to get the field types and the formula itself.
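With simple-salesforce that describe call is available per sObject; the `calculated` and `calculatedFormula` attributes are part of the standard sObject Describe field metadata. A small helper over the returned dict (the `sf` connection itself is assumed, and note that some system-computed fields like the SLA countdown may not carry the `calculated` flag at all):

```python
def calculated_fields(describe_result):
    """Given a describe() dict (e.g. sf.CaseMilestone.describe() from
    simple-salesforce), return the fields Salesforce flags as
    calculated, together with their formula when one is exposed."""
    return [
        (f["name"], f.get("calculatedFormula"))
        for f in describe_result["fields"]
        if f.get("calculated")
    ]

# usage, assuming an authenticated connection:
# sf = Salesforce(username=..., password=..., security_token=...)
# print(calculated_fields(sf.CaseMilestone.describe()))
```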
This is a noob (I think) Event Sourcing question.
As an example we have:
a shoemaker Bob
a customer Alice
Shoemaker Bob has:
"shoemaker_business_locations" (business location addresses)
first address is "88d8 - 5 Baker Street" (Added in January)
second address is "73c6 - 6 Cadman Plaza" (Added in March)
at a certain point (in October) the shoemaker moves from
"5 Baker Street" to "11 Baker Street" (a couple of houses down the block)
Say we have a "shoe_repair_order" (Aggregate Root?)
We have a shoe repair order from Alice that happened
sometime in July, before shoemaker Bob moved
his business location down the block.
Let's say that Alice is looking at this order in December.
Obviously she does not have to know that Bob moved his business.
When she sees her order, I think she should see (at least) the old
business location and/or
"Ordered at 5 Baker Street (New address is 11 Baker street - updated on October 6)".
Question: How do we design the Event-Sourced system to show the old addresses for orders?
Do we always store an event for our order, like
"OrderLocationAdded", with a copy of the actual address stored on the order event itself?
That way every order would have the correct old location, taken simply from its own events.
Only by issuing new events such as "OrderLocationUpdated" could we then update it
for that particular order if need be.
OR
Do we issue an "OrderLocationAdded" event with a reference to an address,
"88d8 AT VERSION 1", so that we know to look for version 1 of our business location?
We would then have to replay events on "shoemaker_business_locations"
until we see "version X" of our address (version 1 in our case).
And we would have to do that for every older order whose address
version is below the version that is current now.
Some other way?
Maybe our read model just stores all of the address information at its different versions, and when queried for a certain version we
don't have to replay events in our write model, but simply grab the address that is at version 1 from our read model?
I think the best way to do this will ultimately depend on how you've designed your Domain Model, but here's how I would approach the problem:
We need to represent Addresses in our Domain Model. This means addresses will be either Value Objects, Entities, or Aggregates. Of the three, an Address could really only be a Value Object or an Entity in this Domain, since we are a shoe-repair application and not an address book. Entities have certain properties, such as being distinguished by their ID, and certain restrictions, like only being part of one Aggregate, that Addresses don't seem to meet. Therefore, I would model an Address as a Value Object in this Domain.
When defining the OrderLocationAdded event on the "shoe_repair_order" Aggregate, you need to decide whether to include the full details of the Address on the event, or use a reference to the Address. Since the Address is a Value Object (it is defined by its data, not by an ID), the answer would be to include the Value Object on the event. This way, when you replay the events to build the current state of the order, it will still have the details of the original address.
If you want to also display the new address on the order, you can use another event, such as BusinessMovedToNewLocation, with the details of the new address, and add this to the order. This event would not replace the original address; it would just allow you to display the informational message that the business is now located elsewhere.
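A minimal Python sketch of that idea (the class names are illustrative, not from any framework): the Address Value Object travels inside the event, so replaying the order's own events recovers the original location with no lookup elsewhere.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Address:
    """Value Object: defined entirely by its data, compared by value."""
    street: str
    city: str

@dataclass(frozen=True)
class OrderLocationAdded:
    """The full Address value is copied onto the event itself."""
    order_id: str
    address: Address

def rebuild_order(events):
    """Replay the order's events; the original address comes straight
    out of the event stream, untouched by later business moves."""
    state = {}
    for event in events:
        if isinstance(event, OrderLocationAdded):
            state["location"] = event.address
    return state
```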
The answer to your problem certainly depends on how you have defined your domain models (specifically how your Aggregates are defined). The way I would approach this problem is to have two aggregates:
A ShoeMaker Aggregate, which in your case stores all the information like the name of the shoemaker business, info about the business' owners, and the list of all the addresses for this shoemaker (where each address can have information like email and phone number, plus a bool field called current that is true for the current address). I imagine this AR would have a command called "ChangeShoeMakerCurrentAddress", which would raise an event called "ShoeMakerAddressesChanged".
An Order Aggregate, which would have all the information about an order, like customer name, price, details, order date, delivery date, etc.
Let's say we have a read model called "ShoeMakerOrderReadModel" which subscribes to events raised by the ShoeMaker and Order Aggregates. This read model listens to events and denormalizes them into a POJO/POCO object. This POJO/POCO model can have a list of shoemaker addresses (where each address can have information like email and phone number, plus a bool field called current that is true for the current address). This read model can subscribe to "ShoeMakerAddressesChanged" events from the ShoeMaker Aggregate and then update the list of shoemaker addresses.
I hope this helps!
The answer is very simple thanks to CQRS: you must not listen to BusinessMovedToANewLocationEvent in the OrderDetailsReadModel. You just look up and store the current business address in the order details in this read model only once, when the OrderPlacedEvent happens.
In this way you don't have to include any Address value object into the event. The Read model does all the work.
You could still listen to the event if you need to for other reasons - just don't update the address. For example, the read model could be even smarter and additionally keep an addressChangedInTheMeantime boolean flag in order to tell Alice that the address she sees is an old one, but this depends on your business needs.
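Sketched in Python (class, handler, and field names are illustrative): the projection copies the address exactly once at OrderPlaced, and a later move only flips the informational flag.

```python
class OrderDetailsReadModel:
    """Denormalized order view: the business address is captured once,
    when the order is placed; a later move never overwrites it."""

    def __init__(self, current_address_lookup):
        # a query against the business read model, injected (assumption)
        self._current_address = current_address_lookup
        self.orders = {}

    def on_order_placed(self, order_id):
        self.orders[order_id] = {
            "address": self._current_address(),
            "address_changed_in_the_meantime": False,
        }

    def on_business_moved(self, new_address):
        # deliberately do NOT touch the stored address
        for row in self.orders.values():
            row["address_changed_in_the_meantime"] = True
```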
Here are a few options:
In your shoemaker read model, include the list of addresses along with the dates they became active. This way, when you process an order event in your order projection, you query the shoemaker read model and get the address that's correct for the order date (this will work even if you are rebuilding the projection later). The disadvantage of this is the coupling between the two views: the order projection now depends on the shoemaker's query API, so you need to make sure the shoemaker projection is up to date and available.
If you have a way of subscribing to multiple event streams so you receive events in approximate global order, in your order projection also handle shoemaker address events (creation and updates). Keep a mini table of shoemaker to 'current' address (meaning current as of the events you have processed, allowing replays to work correctly). Then when handling order creation events, just lookup the current address in the table and copy it into the order view. This is a bit more code, but avoids dependencies between views and avoids the need for address history in the shoemaker view (although you might need that anyway for other reasons).
Make the shoemaker address part of your order domain model. It will need to be included in the order creation command (possibly by command enrichment in the controller layer, looking it up in the shoemaker read view). This doesn't seem necessary since the order doesn't have a concept of the shop address that's separate from the shop's concept (this is different from, say, delivery address, which is usually a per-order item in some way).
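The second option above can be sketched like this (all names are illustrative): one projection consumes both streams and keeps its own mini-table of current addresses, so the address copied into each order row is the one that was current at that point in the event order, and replays reproduce it.

```python
class OrderProjection:
    """Projection over both event streams. current_address is a
    mini-table of each shoemaker's address *as of the events processed
    so far*; order rows copy from it at creation time."""

    def __init__(self):
        self.current_address = {}  # shoemaker_id -> address dict
        self.order_view = {}       # order_id -> denormalized row

    def on_shoemaker_address_changed(self, shoemaker_id, address):
        self.current_address[shoemaker_id] = address

    def on_order_created(self, order_id, shoemaker_id):
        self.order_view[order_id] = {
            "shoemaker_id": shoemaker_id,
            # copy the value so later address events can't mutate it
            "address": dict(self.current_address[shoemaker_id]),
        }
```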
In a Cloudant database, what is the expected behavior of calling PUT on a document that doesn't exist with a revision defined?
The documentation says:
To update (or create) a document, make a PUT request with the updated
JSON content and the latest _rev value (not needed for creating new
documents) to https://$USERNAME.cloudant.com/$DATABASE/$DOCUMENT_ID.
I had assumed that if I did provide a revision, that the db would detect that it was not a match and reject the request. In my test cases I have inconsistent behavior. Most of the time I get the expected 409, Document update conflict. However, occasionally, the document ends up getting created (201), and assigned the next revision.
My test consists of creating a document and then using that revision to update a different document.
POST https://{url}/{db} {_id: "T1"} - store the returned revision
PUT https://{url}/{db}/T2 {_rev: <revision from step 1>}
So if the revision returned was something like 1-79c389ffdbcfe6c33ced242a13f2b6f2, then in the cases where the PUT succeeds, it returns the next revision (like 2-76054ab954c0ef41e9b82f732116154b).
EDIT
If I simplify the test to one step, I can also get different results.
PUT https://{url}/{db}/DoesNotExist {_rev: "1-ffffffffffffffffffffffffffffffff"}
Cloudant is an eventually consistent database, and you're seeing the effects of that. Most of the time the cluster has had time to reach a consistent state between your two API calls, and you'll get the expected update conflict. Sometimes you hit the inconsistency window: your first call has not yet been replicated around the cluster and you hit a different node. It's a valuable insight: it's not safe to assume you can read your own writes.
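The practical consequence, sketched in Python (the HTTP call is injected so the two possible outcomes can be shown without a live cluster): write your client so it accepts that the *same* request may legitimately come back as either status.

```python
def put_with_rev(http_put, db_url, doc_id, rev, doc):
    """PUT a document with an explicit _rev. Against an eventually
    consistent cluster this can return 409 (the node you hit sees the
    revision mismatch) or 201/202 (the node hasn't seen the earlier
    state yet), so a success code is not proof there was no conflict -
    a conflicting revision branch may surface later via _conflicts.
    """
    body = dict(doc, _rev=rev)
    resp = http_put(f"{db_url}/{doc_id}", body)  # requests-style call
    if resp.status_code == 409:
        return "conflict"
    if resp.status_code in (201, 202):
        return "accepted"
    resp.raise_for_status()
```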
Most of the operations in my silverlight client are things that add/update/insert/delete multiple entities in one go.
E.g:
CreateStandardCustomer adds a Customer, Address, Person and Contract record.
CreateEnterpriseCustomer adds a Customer, Address, 2x Person and a CreditLimit record.
It looks like with a DomainService you can only do one thing at a time, e.g. add a customer record, add an address etc. How can I do a batch operation?
You might say to simply add the relevant records from the Silverlight client and call the SubmitChanges() method. However, this is difficult to validate against (server side), because only certain groups of records can be added/updated/deleted at a time. E.g. in the example above, an Address record added alone would not be valid in this system.
Another example would be something like Renew which updates a Customer record and adds a Renewal. These operations aren't valid individually.
Thanks for your help,
Kurren
EDIT: The server-side validation needs to check that the correct operations in the batch have taken place. E.g. from the example above, when we Renew, a Renewal should be created and a Customer should have been updated (one without the other is invalid).
I may be missing something here, but you update a batch of entities the same way you do individual entities: namely perform all the operations on your context and then call SubmitChanges on that context. At the server your insert/delete/update methods for the types will be called as appropriate for all the changes you're submitting.
We use RIA/EF in Silverlight to do exactly that. It doesn't matter if you just create a single entity in your client context (complete with graph) or 100, because as soon as you submit those changes the complete changeset for that context is operated upon.
EDIT: If setting up your entity metadata with Required and Composition attributes on the appropriate properties doesn't cover it, you can also use the DomainService.ChangeSet object to inspect what has been submitted and decide which changes you want to accept.
I maintain a global repository of sites in a table.
website:
id, name, url
1 google http://www.google.com/
2 CNN http://www.cnn.com/
3 SO http://www.stackoverflow.com/
I maintain a reference table, which stores the website IDs the user has saved.
userwebsite
userid, websiteid
[attributes of the table]
Say a user wants to save Microsoft in his collection; he enters
www.microsoft.com
As the website doesn't exist in the global repository, it is first added to the repository and then to his collection. Now the contents of both tables look something like this:
website:
id, name, url
1 google http://www.google.com/
2 CNN http://www.cnn.com/
3 SO http://www.stackoverflow.com/
4 msft http://www.microsoft.com
userwebsite:
userid, websiteid
1 4
Say a user wants to save Google in his collection, and he enters
www.google.com
As the website already exists in the repository, only a reference gets added to the user's collection instead of a new website row.
Here is where I am stuck:
both www.google.com and http://www.google.com/
semantically point to the same site, but when you try to match them they are two distinct strings. How should I go about matching the strings in such cases?
One solution I can think of: given an input site, first check whether the domain exists in the collection of websites (probably a PATINDEX will do here); this gives you the list of sites with the same domain name. Then check whether the path matches any of the resultant websites. Is this a good idea?
Is there an established solution to this problem? Are there better methods?
You don't need pattern matching in this case. What you are really asking for (to continue from what Matteo commented) is a way of validating web addresses and storing them in a consistent way. If you want a regular expression to at least determine whether an address is valid, you can have a look here: http://www.shauninman.com/archive/2006/05/08/validating_domain_names
Or use JavaScript to validate it, although you don't say what language you are using outside of SQL Server.
It's almost the case that you need to send the domain name to a DNS server to resolve before storing it in your table. It may be better to ignore the fact that they are web addresses and just think of them as strings. For example, how would you ensure people's names were compared correctly in a database? The first step is usually to ensure a consistent upper or lower case; from then on it becomes more difficult, such as handling middle names/initials which may be omitted.
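That "store them in a consistent way" step can be sketched in Python with the standard library. The specific rules here (assume http:// when no scheme is given, lowercase the scheme and host, drop a bare trailing slash and any fragment) are a policy choice for illustration, not a standard; real-world canonicalization may also want to strip "www.", default ports, etc.

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(raw):
    """Canonicalize a user-entered address before storing or matching,
    so 'www.google.com' and 'http://www.google.com/' map to one key."""
    raw = raw.strip()
    if "://" not in raw:
        raw = "http://" + raw          # policy: assume http
    parts = urlsplit(raw)
    host = parts.netloc.lower()        # hostnames are case-insensitive
    path = parts.path if parts.path not in ("", "/") else ""
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))
```

Normalize on insert and on lookup, and the website table needs only an exact-match (indexed) comparison instead of PATINDEX scans.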