Slick lifted update on an Object - database

I do updates on lifted entities using Slick. This code updates the firstName of a Contact object:
def updateContact(id: Int, firstName: Option[String]): Unit = {
  val q1 = for {
    c <- Contacts
    if c.id is id
  } yield c.firstName
  // Update value with same or new value
  q1.update(firstName.getOrElse(q1.list().head))
}
The Option here is already useful for updating the value when it is a Some (although it would be nicer if the update only happened when there is a new value).
What I am looking for is a way to query the object by ID, do all the updates in memory using getOrElse, and then update the whole object.
Otherwise I have to run the above for each field of the object, which works, but feels like a dirty hack.

Instead of q1.update(firstName.getOrElse(q1.list().head))
you can write firstName.foreach { fn => q1.update(fn) }
which is shorter, simpler, and runs one query instead of two :).
Using foreach on Option stops looking weird when you think of it as a collection with one or zero elements.
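For instance, a quick illustration of that view:
Some("Alice").foreach(println) // prints Alice
None.foreach(println)          // does nothing, so no update is issued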
Regarding your idea to fetch the whole object, modify it and save it back, you can do it like this:
def updateContact(id: Int, firstName: Option[String], lastName: Option[String], ...): Unit = {
  val q1 = Query(Contacts).filter(_.id === id)
  val c = q1.first
  val modifiedC = c.copy(
    firstName = firstName.getOrElse(c.firstName),
    lastName = lastName.getOrElse(c.lastName),
    ...
  )
  q1.update(modifiedC)
}
Here is another example: http://sysgears.com/notes/how-to-update-entire-database-record-using-slick/
This is clean and simple, and probably the best way to do it if performance is not mission-critical, as it always transfers all columns of Contacts. You can save some traffic by transferring only selected columns, as sketched below.
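A minimal sketch of the selected-columns variant (same hypothetical Contacts table as above; depending on your Slick version the tuple projection may need the ~ syntax instead):
def updateContactName(id: Int, firstName: Option[String], lastName: Option[String]): Unit = {
  // Project only the columns being touched
  val q1 = for {
    c <- Contacts
    if c.id is id
  } yield (c.firstName, c.lastName)
  val (fn, ln) = q1.first
  // One read, one write, but only two columns cross the wire
  q1.update((firstName.getOrElse(fn), lastName.getOrElse(ln)))
}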

Related

Salesforce Apex Class update custom object lookup field with id from parent

I am new to Apex and I'm struggling with creating a class to help me with some data analysis. I have data from a 3rd party (transactions__c) with a field (fin_acct_txt__c) that points to another object (fin_accounts__c). I want to update the lookup field transactions__c.fin_acct__c with the id from fin_accounts__c.
I want to do this in a class versus a trigger as there would be thousands of records loaded from the 3rd party on a monthly basis. I think doing this in bulk would be more efficient.
My thought is that I create a list for transactions__c and a map for fin_accounts__c. Using fin_acct_txt__c = fin_accounts__c.name as the match, I would be able to get the fin_accounts__c.id and update transactions__c.fin_acct__c with that data.
But being new to Apex seems to be causing me some problems that I’m unsure how to resolve.
Here’s a copy of what I’ve done to date:
public class updateTxnFinAcctID {
    // Build map of financial accts since that is unique
    map<string, fin_acct__c> finAccts = new map<string, fin_acct__c>
        ([select id, name from fin_acct__c where name != null]);
    //Iterate through the map to find the id to update the transactions
    {
        for (fin_acct__c finAcct : finAccts.values())
        {
            if (finAcct.name != Null)
            {
                finAccts.put(finAcct.name, finAcct);
            }
            // Find all records in transaction__c where fin_acct__c is null
            //and the pointer is the name in the map
            list<Transaction__c> txns = [
                select id, fin_acct_txt__c from Transaction__c where fin_acct__c = null
                and fin_acct_txt__c = :finaccts[0].name];
            //create the list that will be used to update the transaction__c
            list<Transaction__c> txnUpdate = new list<Transaction__c>();
            {
                //Find the id from fin_acct__c where name = fin_acct_txt__c
                for (Transaction__c txn : txns) {
                    finacct[0].Id = txn.fin_acct__c;
                    txnUpdate.add(txn);
                }
                //3. Update transaction with ID
            }
        }
        // if (txnUpdate.size()>0 { update txnUpdate};
        system.debug(txnUpdate.size());
    }
}
I seem to be in a doom loop. The error I get is "Expression must be a list type: Map", pointing to the list txns = [ ... ]. But as that is not unique, it must be a list. I believe I've got something structurally wrong here and that this error is a symptom of a larger issue.
Thanks.
I tried to understand what your code should do, and I have a few tips that may help solve your issue:
1) In the first loop over the values of the map finAccts you don't need the if (finAcct.name != Null) check, because you already filter on that in the SOQL query (where name != null).
2) It's bad practice to use different kinds of values (for example, Ids and Names) as keys in the same map. When you queried fin_acct__c into the map finAccts, the keys of the map were Ids of fin_acct__c. Then, in the first loop, you put the same objects into the same map again, this time keyed by name. If you really need a map keyed by names, it is better to create a new map and put the data there.
3) You execute a SOQL query against Transaction__c inside the loop. This is likely to cause an exception related to Salesforce governor limits (especially if the code has to handle large amounts of data). Better to collect all fin_acct__c names in a collection, move the SOQL query out of the loop, and use IN instead of = in the WHERE condition.
If I understood correctly that the fin_acct_txt__c field contains names, not Ids, your class should look something like:
public class updateTxnFinAcctID {
    // Wrapped in a method so the class compiles (the method name is mine)
    public static void run() {
        Map<String, fin_acct__c> finAccts = new Map<String, fin_acct__c>
            ([select Id, Name from fin_acct__c where Name != null]);
        Map<String, fin_acct__c> finAcctByNames = new Map<String, fin_acct__c>();
        for (fin_acct__c finAcct : finAccts.values()) {
            finAcctByNames.put(finAcct.Name, finAcct);
        }
        List<Transaction__c> txns = [select Id, fin_acct_txt__c, fin_acct__c
            from Transaction__c where fin_acct__c = null
            and fin_acct_txt__c IN :finAcctByNames.keySet()];
        List<Transaction__c> txnUpdate = new List<Transaction__c>();
        for (Transaction__c txn : txns) {
            fin_acct__c relatedFinAcct = finAcctByNames.get(txn.fin_acct_txt__c);
            if (relatedFinAcct != null) {
                txn.fin_acct__c = relatedFinAcct.Id;
                txnUpdate.add(txn);
            }
        }
        if (!txnUpdate.isEmpty()) {
            update txnUpdate;
            System.debug(txnUpdate.size());
        }
    }
}
It may contain some spelling mistakes, but this is the general idea.
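Assuming the static wrapper method above (run() is my naming, not from the original answer), you can invoke it from Execute Anonymous in the Developer Console:
// Execute Anonymous: process the monthly load in one go
updateTxnFinAcctID.run();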

How to Fetch a set of Specific Keys in Firebase?

Say I'd like to fetch only the items with keys "-Ju2-oZ8sJIES8_shkTv", "-Ju2-zGVMuX9tMGfySko", and "-Ju202XUwybotkDPloeo".
var items = new Firebase("https://hello-cambodia.firebaseio.com/items");
items.orderByKey().equalTo("-Ju2-gVQbXNgxMlojo-T").once('value', function(snap1){
  items.orderByKey().equalTo("-Ju2-zGVMuX9tMGfySko").once('value', function(snap2){
    items.orderByKey().equalTo("-Ju202XUwybotkDPloeo").once('value', function(snap3){
      console.log(snap1.val());
      console.log(snap2.val());
      console.log(snap3.val());
    });
  });
});
I don't feel that this is the right way to fetch the items, especially when I have over 1000 keys to fetch.
If possible, I would really like something where I can pass an array of keys, like:
var itemKeys = ["-Ju2-gVQbXNgxMlojo-T", "-Ju2-zGVMuX9tMGfySko", "-Ju202XUwybotkDPloeo"];
var items = new Firebase("https://hello-cambodia.firebaseio.com/items");
items.orderByKey().equalTo(itemKeys).once('value', function(snap){
  console.log(snap.val());
});
Any suggestions would be appreciated.
Thanks
Doing this:
items.orderByKey().equalTo("-Ju2-gVQbXNgxMlojo-T")
Gives exactly the same result as:
items.child("-Ju2-gVQbXNgxMlojo-T")
But the latter is not only more readable, it also avoids the need for scanning indexes.
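So if you genuinely need a fixed set of keys, one child() read per key is the idiomatic route. A minimal sketch with the legacy SDK from the question (the callback bookkeeping is mine):
var itemKeys = ["-Ju2-gVQbXNgxMlojo-T", "-Ju2-zGVMuX9tMGfySko", "-Ju202XUwybotkDPloeo"];
var items = new Firebase("https://hello-cambodia.firebaseio.com/items");
var results = {};
var pending = itemKeys.length;
itemKeys.forEach(function(key) {
  // Direct-path read for each key; no index scan involved
  items.child(key).once('value', function(snap) {
    results[key] = snap.val();
    pending -= 1;
    if (pending === 0) {
      console.log(results); // all requested items fetched
    }
  });
});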
But what you have to answer is why you want to select these three items. Is it because they all have the same status? Because they fall into a specific date range? Because the user selected them in a list? As soon as you can identify the reason for selecting these three items, you can look at converting the selection into a query. E.g.
var recentItems = ref.orderByChild("createdTimestamp")
.startAt(Date.now() - 24*60*60*1000)
.endAt(Date.now());
recentItems.on('child_added'...
This query would give you the items of the past day, if you had a field with the timestamp.
You can use a Firebase child reference. For example,
var currFirebaseRoom = new Firebase(yourFirebaseURL);
var userRef = currFirebaseRoom.child('users');
Now you can access this child with
userRef.on('value', function(userSnapshot) {
  //your code
});
You generally should not be accessing things by raw Firebase keys. Create a child called data, put all your values there, and then you can access them through that child reference.

How to add items to an array one by one in groovy language

I'm developing a Grails app, and I already have a domain class "ExtendedUser" which has info about users such as "name", "bio", and "birthDate". Now I'm planning to do statistics about users' ages, so I have created another controller, "StatisticsController", and the idea is to store all the birthDates in a local array so I can run multiple calculations on it.
class StatisticsController {
    // @Secured(["ROLE_COMPANY"])
    def teststat() {
        def user = ExtendedUser.findAll() //A list with all of the users
        def emptyList = [] //An empty list to store all the birthdates
        def k = 0
        while (k <= user.size()) {
            emptyList.add(user[k].birthDate) //Add a new birthdate to the emptyList (The Error)
            k++
        }
        [age: user]
    }
}
When I test it, it shows me this error message: Cannot get property 'birthDate' on null object.
So my question is: what is the best way to store all the birthdates in a single array or list so I can make calculations with them? Thank you.
I prefer to use .each() in Groovy as much as possible. Read about Groovy looping here. (As an aside, the error in your version comes from the loop condition: k <= user.size() runs one step past the end of the list, so user[k] is null on the final iteration.)
For this, try something like:
user.each() {
    emptyList.push(it.birthDate) //'it' is the default iterator variable created by .each()
}
I don't have a Grails environment set up on this computer, so this is off the top of my head without being tested, but give it a shot.
I would use this approach:
def birthDates = ExtendedUser.findAll().collect { it.birthDate }
The collect method transforms each element of the collection and returns the transformed collection. In this case, users are being transformed into their birth dates.
Can you try:
List dates = ExtendedUser.findAll().birthDate
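Once the dates are in a list, the calculations the question mentions are short. A rough sketch (the 365.25 days-per-year conversion is my approximation):
def birthDates = ExtendedUser.findAll().collect { it.birthDate }
def now = new Date()
// Groovy date subtraction yields whole days, so convert to years
def ages = birthDates.collect { ((now - it) / 365.25) as int }
def averageAge = ages.sum() / ages.size()
println "Average user age: ${averageAge}"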

How to make a UUID in DynamoDB?

In my db scheme, I need an autoincrement primary key. How can I realize this feature?
PS: for access to DynamoDB, I use dynode, a module for Node.js.
Disclaimer: I am the maintainer of the Dynamodb-mapper project
Intuitive workflow of an auto-increment key:
get the last counter position
add 1
use the new number as the index of the object
save the new counter value
save the object
This is just to explain the underlying idea. Never do it this way: the sequence is not atomic, so under certain workloads you may allocate the same ID to 2+ different objects, which would result in data loss.
The solution is to use the atomic ADD operation along with ALL_NEW of UpdateItem:
atomically generate an ID
use the new number as the index of the object
save the object
In the worst-case scenario, the application crashes before the object is saved, but you never risk allocating the same ID twice.
There is one remaining problem: where do we store the last ID value? We chose:
{
    "hash_key" = -1, # 0 was judged too risky as it is the default value for integers
    "__max_hash_key__" = N
}
Of course, to work reliably, all applications inserting data MUST be aware of this system otherwise you might (again) overwrite data.
The last step is to automate the process. For example:
When hash_key is 0:
    atomically_allocate_ID()
    actual_save()
For implementation details (Python, sorry), see https://bitbucket.org/Ludia/dynamodb-mapper/src/8173d0e8b55d/dynamodb_mapper/model.py#cl-67
To tell you the truth, my company does not use it in production because, most of the time, it is better to find another key: for a user, an ID; for a transaction, a datetime; ...
I wrote some examples in dynamodb-mapper's documentation, and they can easily be extrapolated to Node.JS; a minimal sketch follows.
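Here is a sketch of the atomic allocation step in Node (my translation, not from dynamodb-mapper; it assumes the aws-sdk DocumentClient and the counter item described above):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

function allocateId(tableName, callback) {
  docClient.update({
    TableName: tableName,
    Key: { hash_key: -1 },                          // the magic counter item
    UpdateExpression: 'ADD #max :one',              // atomic increment
    ExpressionAttributeNames: { '#max': '__max_hash_key__' },
    ExpressionAttributeValues: { ':one': 1 },
    ReturnValues: 'ALL_NEW'
  }, function (err, data) {
    if (err) return callback(err);
    callback(null, data.Attributes.__max_hash_key__); // freshly reserved ID
  });
}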
If you have any question, feel free to ask.
Another approach is to use a UUID generator for primary keys, as these are highly unlikely to clash.
IMO you are more likely to experience errors consolidating primary key counters across highly available DynamoDB tables than from clashes in generated UUIDs.
For example, in Node:
npm install uuid
var uuid = require('uuid');
// Generate a v1 (time-based) id
uuid.v1(); // -> '6c84fb90-12c4-11e1-840d-7b25c5ee775a'
// Generate a v4 (random) id
uuid.v4(); // -> '110ec58a-a0f2-4ac4-8393-c866d813b8d1'
Taken from SO answer.
If you're okay with gaps in your incrementing id, and okay with it only roughly corresponding to the order in which the rows were added, you can roll your own: create a separate table called NextIdTable with a single numeric primary key attribute, call it Counter.
Each time you want to generate a new id, you would do the following:
Do a GetItem on NextIdTable to read the current value of Counter --> curValue
Do a PutItem on NextIdTable to set the value of Counter to curValue + 1. Make this a conditional PutItem so that it will fail if the value of Counter has changed.
If that conditional PutItem failed, it means someone else was doing this at the same time as you were. Start over.
If it succeeded, then curValue is your new unique ID.
Of course, if your process crashes before actually applying that ID anywhere, you'll "leak" it and have a gap in your sequence of IDs. And if you're doing this concurrently with some other process, one of you will get value 39 and one of you will get value 40, and there are no guarantees about which order they will actually be applied in your data table; the guy who got 40 might write it before the guy who got 39. But it does give you a rough ordering.
Parameters for a conditional PutItem in Node.js are detailed here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html#!AWS/DynamoDB.html. If you had previously read a value of 38 from Counter, your conditional PutItem request might look like this:
var conditionalPutParams = {
  TableName: 'NextIdTable',
  Item: {
    Counter: {
      N: '39'
    }
  },
  Expected: {
    Counter: {
      AttributeValueList: [
        {
          N: '38'
        }
      ],
      ComparisonOperator: 'EQ'
    }
  }
};
For those coding in Java, DynamoDBMapper can now generate unique UUIDs on your behalf.
DynamoDBAutoGeneratedKey
Marks a partition key or sort key property as being auto-generated. DynamoDBMapper will generate a random UUID when saving these attributes. Only String properties can be marked as auto-generated keys.
Use the DynamoDBAutoGeneratedKey annotation like this:
@DynamoDBTable(tableName="AutoGeneratedKeysExample")
public class AutoGeneratedKeys {
    private String id;

    @DynamoDBHashKey(attributeName = "Id")
    @DynamoDBAutoGeneratedKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}
As you can see in the example above, you can apply both the @DynamoDBAutoGeneratedKey and @DynamoDBHashKey annotations to the same attribute to generate a unique hash key.
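A hypothetical usage sketch (client construction varies by SDK version; the mapper assigns the UUID during save):
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
DynamoDBMapper mapper = new DynamoDBMapper(client);

AutoGeneratedKeys item = new AutoGeneratedKeys();
mapper.save(item);                 // mapper fills in the key on save
System.out.println(item.getId()); // a random UUID string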
Addition to #yadutaf's answer
AWS supports Atomic Counters.
Create a separate table (order_id) with a row holding the latest order_number:
+----+--------------+
| id | order_number |
+----+--------------+
| 0  | 5000         |
+----+--------------+
This will allow you to increment order_number by 1 and get the incremented result in a callback from AWS DynamoDB:
const AWS = require('aws-sdk'); // assumed import, not shown in the original snippet
const config = {
  region: 'us-east-1',
  endpoint: "http://localhost:8000"
};
const docClient = new AWS.DynamoDB.DocumentClient(config);
let params = {
  TableName: 'order_id',
  Key: {
    "id": 0
  },
  UpdateExpression: "set order_number = order_number + :val",
  ExpressionAttributeValues: {
    ":val": 1
  },
  ReturnValues: "UPDATED_NEW"
};
docClient.update(params, function(err, data) {
  if (err) {
    console.log("Unable to update the table. Error JSON:", JSON.stringify(err, null, 2));
  } else {
    console.log(data);
    console.log(data.Attributes.order_number); // <= here is our incremented result
  }
});
🛈 Be aware that in some rare cases there might be problems with the connection between your caller and the AWS API. The DynamoDB row will still be incremented while you get a connection error, so some incremented values may end up unused.
You can then use the incremented data.Attributes.order_number in your table, e.g. to insert {id: data.Attributes.order_number, otherfields: {}} into the order table.
I don't believe it is possible to do a SQL-style auto-increment because the tables are partitioned across multiple machines. I generate my own UUID in PHP, which does the job; I'm sure you could come up with something similar in JavaScript.
I've had the same problem and created a small web service just for this purpose. See this blog post, that explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API. The service is free.
Auto-increment is not good from a performance perspective, as it overloads specific partitions while leaving others idle; it does not distribute writes evenly if you're storing data in DynamoDB.
awsRequestId looks like it's actually a v4 (random) UUID; here's a code snippet to try it:
exports.handler = function(event, context, callback) {
  console.log('remaining time =', context.getRemainingTimeInMillis());
  console.log('functionName =', context.functionName);
  console.log('AWSrequestID =', context.awsRequestId);
  callback(null, context.functionName);
};
In case you want to generate this yourself, you can use https://www.npmjs.com/package/uuid or Ulide to generate different versions of UUID based on RFC 4122:
V1 (timestamp based)
V3 (Namespace)
V4 (Random)
For Go developers, you can use packages such as Google's UUID, Pborman, or Satori. Pborman is better in performance; check these articles and benchmarks for more details.
More info on the Universally Unique Identifier specification can be found here.
Create a new file.js and put this code in it:
exports.guid = function () {
  // Returns 8 random hex digits, optionally split as "-xxxx-xxxx"
  function _p8(s) {
    var p = (Math.random().toString(16) + "000000000").substr(2, 8);
    return s ? "-" + p.substr(0, 4) + "-" + p.substr(4, 4) : p;
  }
  // Three random blocks plus today's ISO date, with all dashes stripped
  return (_p8() + _p8(true) + _p8(true) + new Date().toISOString().slice(0, 10)).replace(/-/g, "");
};
Then you can apply this function to the primary key id. It will generate a UUID-like value.
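Hypothetical usage, assuming the file above was saved as guid.js:
var generator = require('./guid.js');
console.log(generator.guid()); // 24 random hex digits followed by today's date digits (YYYYMMDD)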
In case you are using DynamoDB with the Dynamoose ORM, you can easily set a default unique id. Here is a simple user-creation example:
// user.model.js
const dynamoose = require("dynamoose");

const userSchema = new dynamoose.Schema(
  {
    id: {
      type: String,
      hashKey: true,
    },
    displayName: String,
    firstName: String,
    lastName: String,
  },
  { timestamps: true },
);

const User = dynamoose.model("User", userSchema);
module.exports = User;
// user.controller.js
const { v4: uuidv4 } = require("uuid");
const User = require("./user.model");

exports.create = async (req, res) => {
  const user = new User({ id: uuidv4(), ...req.body }); // set unique id
  // to(), badRes() and goodRes() are the author's own helpers
  const [err, response] = await to(user.save());
  if (err) {
    return badRes(res, err);
  }
  return goodRes(res, response);
};
Instead of using a UUID, use a KSUID for ids; they are naturally ordered by generation time.
https://www.npmjs.com/package/ksuid?activeTab=readme
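A minimal sketch with that package (the printed value is illustrative):
const KSUID = require('ksuid');
const id = KSUID.randomSync().string;
console.log(id); // e.g. '0ujtsYcgvSTl8PAuAdqWYSMnLOv', sortable by creation time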

Salesforce Custom Object Relationship Creation

I want to create two objects and link them via a parent child relationship in C# using the Metadata API.
I can create objects and 'custom' fields for the objects via the metadata, but the service just ignores the field definition for the relationship.
My snippet for the fields is as follows:
CustomField[] fields = new CustomField[] {
    new CustomField()
    {
        type = FieldType.Text,
        label = "FirstName",
        length = 50,
        lengthSpecified = true,
        fullName = "LJUTestObject__c.FirstName__c"
    },
    new CustomField()
    {
        type = FieldType.Text,
        label = "LastName",
        length = 50,
        lengthSpecified = true,
        fullName = "LJUTestObject__c.Lastname__c"
    },
    new CustomField()
    {
        type = FieldType.Text,
        label = "Postcode",
        length = 50,
        lengthSpecified = true,
        fullName = "LJUTestChildObject__c.Postcode__c"
    },
    new CustomField()
    {
        type = FieldType.MasterDetail,
        relationshipLabel = "PostcodeLookup",
        relationshipName = "LJUTestObject__c.LJUTestObject_Id__c",
        relationshipOrder = 0,
        relationshipOrderSpecified = true,
        fullName = "LJUTestChildObject__c.Lookup__r"
    }
};
The parent object looks like:
LJUTestObject
ID,
FirstName, Text(50)
LastName, Text(50)
The child object looks like:
LJUTestChildObject
ID,
Postcode, Text(50)
I want to link the parent to the child so one "LJUTestObject", can have many "LJUTestChildObjects".
What values do I need for FieldType, RelationshipName, and RelationshipOrder to make this happen?
TL;DR:
Use this as a template for accomplishing what you want:
var cf = new CustomField();
cf.fullName = "ChildCustomObject__c.ParentCustomField__c";
cf.type = FieldType.MasterDetail;
cf.typeSpecified = true;
cf.label = "Parent Or Whatever You Want This To Be Called In The UI";
cf.referenceTo = "ParentCustomObject__c";
cf.relationshipName = "ParentOrWhateverYouWantThisToBeCalledInternally";
cf.relationshipLabel = "This is an optional label";
var aUpsertResponse = smc.upsertMetadata(metadataSession, null, null, new Metadata[] { cf });
The key difference:
The natural temptation is to put the CustomField instances into the fields array of a CustomObject, and pass that CustomObject to the Salesforce Metadata API. And this does work for most data fields, but it seems that it does not work for relationship fields.
Instead, pass the CustomField directly to the Salesforce Metadata API, not wrapped in a CustomObject.
Those muted errors:
Turns out that errors are occurring, and the Salesforce Metadata API knows about them, but doesn't bother telling you about them when they occur for CustomFields nested inside a CustomObject.
By passing the CustomField directly to the Metadata API (not wrapped in a CustomObject), the call to upsertMetadata will still return without an exception being thrown (as it was already doing for you), but this time, if something goes wrong, upsertResponse[0].success will be false instead of true, and upsertResponse[0].errors will give you more information.
Other gotchas
You must specify referenceTo, and if it doesn't match the name of an existing built-in or custom object, the error message will be the same as if you had not specified referenceTo at all.
fullName should end in __c not __r. __r is for relationship names, but remember that fullName is specifying the field name, not the relationship name.
relationshipName - I got it working by not including __r on the end, and not including the custom object name at the start. I haven't tested to be sure other ways don't work, but be aware that at the very least, you don't need to have those extra components in the relationshipName.
Remember generally that anything with label in its name is probably for display to users in the UI, and thus can have spaces in it to be nicely formatted the way users expect.
Salesforce... really???
(mini rant warning)
The Salesforce Metadata API is unintuitive and poorly documented. That's why you got stuck on such a simple thing. That's why no-one knew the answer to your question. That's why, four years later, I got stuck on the same thing. Creating relationships is one of the main things you would want to do with the Salesforce Metadata API, and yet it has been this difficult to figure out, for this long. C'mon Salesforce, we know you're a sales company more than a tech company, but you earn trazillions of dollars and are happy to show it off - invest a little more in a better API experience for the developers who invest in learning your platform.
I've not created these through the Metadata API like this myself, but I'd suggest that:
relationshipName = "LJUTestObject__c.LJUTestObject_Id__c"
should be:
relationshipName = "LJUTestObject__c.Id"
as Id is a standard field; the __c suffix is only used for custom fields (not standard fields on custom objects). Also, it may be that the relationship full name should end in __c not __r, but try the change above first and see how you go.
