ServiceStack.OrmLite returning "empty records" - sql-server

I'm starting with ServiceStack and using OrmLite to access my database. I used the Northwind example that comes bundled and modified it to access a SQL Server database.
I changed the name of the table (Customer to Client) and the attributes of the POCO class (Customer.cs) so they match the correct ones in my table. When the request is made, the returned data consists of an array containing N empty objects, N being the number of records in the desired table.
If I add or remove records in the table, the change is reflected in the returned data. So OrmLite is querying the table, but I can't understand why my records are not populated.
The original json output:
{
Customers: [
{Id:"...", CompanyName:"...", },
{Id:"...", CompanyName:"...", },
{Id:"...", CompanyName:"...", }
],
ResponseStatus: {...}
}
After modification, I'm receiving:
{
Clients: [
{},
{},
{}
],
ResponseStatus: {}
}
Note the array of N empty objects as the value of the Clients key.
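For reference, a minimal sketch of the kind of mapping OrmLite expects, assuming a hypothetical Client table with ClientId and CompanyName columns (names not taken from the post). OrmLite populates public properties whose names match the table's columns, and the [Alias] attribute can bridge any differences; properties that don't line up with a column are simply never populated:
using ServiceStack.DataAnnotations;

[Alias("Client")]                 // POCO name -> table name
public class Client
{
    [Alias("ClientId")]           // property name -> column name (assumed)
    public int Id { get; set; }

    public string CompanyName { get; set; }   // must be a public property, not a field
}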

Related

Azure Data Factory - converting lookup result array

I'm pretty new to Azure Data Factory (ADF) and have stumbled into something I would have solved with a couple of lines of code.
Background
Main flow:
Lookup Activity fetching an array of IDs to process
ForEach Activity looping over the input array and using a Copy Activity to pull data from a REST API and store it in a database
Step #1 results in an array containing IDs:
{
"count": 10000,
"value": [
{
"id": "799128160"
},
{
"id": "817379102"
},
{
"id": "859061172"
},
... many more...
Step #2: When the lookup returns a lot of IDs, the individual REST calls take a lot of time. The REST API supports batching IDs using a comma-separated input.
The question
How can I convert the array from the input into a new array of comma-separated fields? This would reduce the number of Activities and the time to run.
Expecting something like this:
{
"count": 1000,
"value": [
{
"ids": "799128160,817379102,859061172,...."
},
{
"ids": "n,n,n,n,n,n,n,n,n,n,n,n,...."
}
... many more...
EDIT 1 - 19th Dec 22
Using an Until Activity and keeping track of positions, I managed to do it in plain ADF. It would have been nice if this could have been done with some simple array manipulation in a code snippet.
One way is to do the manipulation with a Data Flow:
First, add a Surrogate Key transformation after the source to generate an incrementing key; say the new key field is 'SrcKey'.
(Data preview of the Surrogate Key step omitted.)
Next, add an Aggregate transformation where you group by mod(SrcKey, 3). This groups rows with the same remainder into the same bucket.
In the same Aggregate, add a collected column with the expression trim(toString(collect(id)), '[]'), which gathers the ids of each bucket into a comma-separated string.
(Data preview of the Aggregate step omitted.)
Finally, store the output as a single file in blob storage.

Mongoose - Can't explain population

I'm doing a certain query, and I want to get its execution time (including the population):
const managerId = "023492745"
const company = await Companies.find({
    _id: "1234"
})
    .populate({
        path: "employees",
        match: {
            _id: { $ne: managerId },
        },
    })
    .explain()
I tried to use explain() on the query, but it only returns information about the find() part and not about the populate() part. How can I get the execution time of the whole query?
explain is a command executed by the MongoDB server, while populate is a function executed on the client side by Mongoose.
The populate function works by receiving the results of the find from the server, then submitting additional queries to retrieve the corresponding data to place in each document.
The response to the explain command does not contain the found documents, only the statistics and metadata about the query, so there is nothing for populate to operate on.
Instead of explain, you might try increasing the log verbosity or enabling profiling on the mongod server to capture the subsequent queries.
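For example, a minimal sketch (not from the answer above) that makes those follow-up queries visible on the client side, along with the server-side profiler commands:
// Log every operation Mongoose sends to the server, including the extra
// find on "employees" that populate() issues after the initial Companies.find().
const mongoose = require("mongoose");
mongoose.set("debug", true);

// Alternatively, in the mongo shell, capture timings on the server side:
//   db.setProfilingLevel(2)                              // profile all operations
//   db.system.profile.find().sort({ ts: -1 }).limit(10)  // inspect the recent queries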

How can I generate a DB that fits my Room schema?

I have a database with quite a lot of entities, and I want to preload data from a file on first creation of the database. For that, the Room schema needs to fit the schema of the database file. Since converting the JSON schema to SQLite statements by hand is very error-prone (I would need to copy-paste every single statement and exchange the variable names), I am looking for a way to automatically generate a database from the schema, which I then just need to fill with the data.
However, there is apparently no information on the internet about whether that is possible, or how to do it. It's my first time working with SQLite (normally I use MySQL), and also the first time I have seen a database schema in JSON. (Standard MariaDB export options always just export the CREATE TABLE statements.)
Is there a way? Or does Room provide any way to actually get the CREATE TABLE statements as proper text, not split up into tons of JSON arrays?
I followed the guide in the Android developer documentation to get the JSON schema, so I already have that file. For those who do not know its structure, it looks like this:
{
"formatVersion": 1,
"database": {
"version": 1,
"identityHash": "someAwesomeHash",
"entities": [
{
"tableName": "Articles",
"createSql": "CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`id` INTEGER NOT NULL, `germanArticle` TEXT NOT NULL, `frenchArticle` TEXT, PRIMARY KEY(`id`))",
"fields": [
{
"fieldPath": "id",
"columnName": "id",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "germanArticle",
"columnName": "germanArticle",
"affinity": "TEXT",
"notNull": true
},
{
"fieldPath": "frenchArticle",
"columnName": "frenchArticle",
"affinity": "TEXT",
"notNull": false
}
],
"primaryKey": {
"columnNames": [
"id"
],
"autoGenerate": false
},
"indices": [
{
"name": "index_Articles_germanArticle",
"unique": true,
"columnNames": [
"germanArticle"
],
"createSql": "CREATE UNIQUE INDEX IF NOT EXISTS `index_Articles_germanArticle` ON `${TABLE_NAME}` (`germanArticle`)"
},
{
"name": "index_Articles_frenchArticle",
"unique": true,
"columnNames": [
"frenchArticle"
],
"createSql": "CREATE UNIQUE INDEX IF NOT EXISTS `index_Articles_frenchArticle` ON `${TABLE_NAME}` (`frenchArticle`)"
}
],
"foreignKeys": []
},
...
Note: My question was not how to create the Room DB from the schema. To get the schema, I already had to create all the entities and the database. The question was how to get the structure Room creates, as SQL, to prepopulate my database. However, I think the answer is a really nice explanation, and in fact I found the SQL statements I was searching for in the generated Java file, which was an awesome hint. ;)
Is there a way? Or does Room provide any way to actually get the CREATE TABLE statements as proper text, not split up into tons of JSON arrays?
You cannot simply provide the CREATE SQL to Room; what you need to do is generate the Java/Kotlin classes (entities) from the JSON and then add those classes to the project.
Native SQLite (i.e. not using Room) would be a different matter, as that could be done at runtime.
The way Room works is that the database is generated (at compile time) from the classes annotated with @Entity.
The entity classes have to exist for the compiler to correctly generate the code.
Furthermore, the entities have to be included in a class for the database, annotated with @Database (this class is typically abstract).
Additionally, to access the database tables you have abstract classes or interfaces for the SQL, each annotated with @Dao, and again these require the entity classes, as the SQL is checked at compile time.
e.g. the JSON you provided would equate to something like :-
@Entity(
        indices = {
                @Index(value = "germanArticle", name = "index_Articles_germanArticle", unique = true),
                @Index(value = "frenchArticle", name = "index_Articles_frenchArticle", unique = true)
        },
        primaryKeys = {"id"}
)
public class Articles {
    //@PrimaryKey // Could use this as an alternative
    long id;
    @NonNull
    String germanArticle;
    String frenchArticle;
}
So your process would have to convert the JSON to create the above, which could then be copied into the project.
You would then need a class for the database, which could for example be :-
@Database(entities = {Articles.class}, version = 1)
abstract class MyDatabase extends RoomDatabase {
}
Note that Dao classes would be added to the body of the above, along the lines of :-
abstract MyDaoClass getDao();
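For illustration, such a Dao could eventually look like this (the method names and query below are assumptions, not part of the original answer) :-
@Dao
interface MyDaoClass {
    @Insert
    long insert(Articles article);            // returns the rowid of the inserted row

    @Query("SELECT * FROM Articles")
    List<Articles> getAllArticles();          // read the whole table
}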
Or does Room provide any way to actually get the CREATE TABLE statements as proper text, not split up into tons of JSON arrays?
Yes it does ....
At this stage, if you compile, Java is generated (MyDatabase_Impl for the above, i.e. the name of the database class suffixed with _Impl). However, as there are no Dao classes/interfaces yet, the database would be unusable from a Room perspective (and thus wouldn't even get created).
Part of the generated code would be :-
@Override
public void createAllTables(SupportSQLiteDatabase _db) {
    _db.execSQL("CREATE TABLE IF NOT EXISTS `Articles` (`id` INTEGER NOT NULL, `germanArticle` TEXT NOT NULL, `frenchArticle` TEXT, PRIMARY KEY(`id`))");
    _db.execSQL("CREATE UNIQUE INDEX IF NOT EXISTS `index_Articles_germanArticle` ON `Articles` (`germanArticle`)");
    _db.execSQL("CREATE UNIQUE INDEX IF NOT EXISTS `index_Articles_frenchArticle` ON `Articles` (`frenchArticle`)");
    _db.execSQL("CREATE TABLE IF NOT EXISTS room_master_table (id INTEGER PRIMARY KEY,identity_hash TEXT)");
    _db.execSQL("INSERT OR REPLACE INTO room_master_table (id,identity_hash) VALUES(42, 'f7294cddfc3c1bc56a99e772f0c5b9bb')");
}
As you can see, the Articles table and the two indices are created; the room_master_table is used for validation checking.
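As a side note (not part of the answer above): once a database file matching this generated schema has been built and filled, Room 2.2+ can copy it in on first creation via createFromAsset(), which covers the prepopulation goal. A sketch, given an application Context named context; the database and asset file names are assumptions :-
// Build the Room database from a packaged, pre-filled asset on first creation.
MyDatabase db = Room.databaseBuilder(context, MyDatabase.class, "articles.db")
        .createFromAsset("database/prepopulated.db")   // hypothetical asset path
        .build();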

Azure Stream Analytics - Querying JSON Arrays of arrays

I have a problem writing a query to extract a table out of the arrays in a JSON file.
The problem is how to get the information from the "dataPackets" array and the arrays it contains, and turn it all into a normal SQL table.
One hard issue is the "CrashNotification" and "CrashMaxModuleAccelerations" entries; I don't know how to define and use them.
The file looks like this:
{ "imei": { "imei": "351631044527130F", "imeiNotEncoded":
"351631044527130"
},
"dataPackets": [ [ "CrashNotification", { "version": 1, "id": 28 } ], [
"CrashMaxModuleAccelerations", { "version": 1, "module": [ -1243, -626,
14048 ] } ] ]}
I tried to use the GetArrayElements method and other approaches, but I am never able to access the second-level arrays, such as the elements of "CrashNotification" inside "dataPackets", or the elements of "module" in the "CrashMaxModuleAccelerations" entry of "dataPackets".
I also looked here (Select the first element in a JSON array in Microsoft stream analytics query) and it doesn't work.
I would appreciate any help :)
Based on your schema, here's an example of a query that will extract a table with the following columns: imei, crashNotification_version, crashNotification_id
WITH Datapackets AS
(
SELECT imei.imei as imei,
GetArrayElement(Datapackets, 0) as CrashNotification
FROM input
)
SELECT
imei,
GetRecordPropertyValue (GetArrayElement(CrashNotification, 1), 'version') as crashNotification_version,
GetRecordPropertyValue (GetArrayElement(CrashNotification, 1), 'id') as crashNotification_id
FROM Datapackets
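(Not part of the original answer.) The same pattern should extend to the second packet and its nested module array; this is only a sketch based on the schema above, with the column aliases and array positions assumed:
WITH Datapackets AS
(
    SELECT imei.imei AS imei,
           GetArrayElement(dataPackets, 1) AS CrashMaxModuleAccelerations
    FROM input
)
SELECT
    imei,
    GetRecordPropertyValue(GetArrayElement(CrashMaxModuleAccelerations, 1), 'version') AS crashMax_version,
    -- "module" is itself an array, so pick its elements by position
    GetArrayElement(GetRecordPropertyValue(GetArrayElement(CrashMaxModuleAccelerations, 1), 'module'), 0) AS module_0
FROM Datapackets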
Let me know if you have any further question.
Thanks,
JS (Azure Stream Analytics)
We built an HTTP API called Stride for converting streaming JSON data into real-time, incrementally updated tables using only SQL.
All you'd need to do is write raw JSON data to the Stride API's /collect endpoint, define continuous SQL queries via the /process endpoint, and then push or pull data via the /analyze endpoint.
This approach eliminates the need to deal with any underlying data infrastructure and gives you a SQL-based approach to this type of streaming analytics problem.

Is there a built-in function to get all unique values in an array field, across all records?

My schema looks like this:
var ArticleSchema = new Schema({
  ...
  category: [{
    type: String,
    default: ['general']
  }],
  ...
});
I want to parse through all records and find all unique values of this field across all records. This will be sent to the front end via a service call, for look-ahead search when tagging articles.
We could iterate through every single record, go through each array value and do a check, but this would be O(n²).
Is there an existing function or another way that has better performance?
You can use the distinct function to get the unique values across all category array fields of all documents:
Article.distinct('category', function(err, categories) {
  // categories is an array of the unique category values
});
Put an index on category for best performance.
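For example, the index can be declared on the schema itself, and distinct also works with promises (a sketch; only ArticleSchema and Article come from the question and answer above):
// A multikey index over the category array; MongoDB indexes each array element.
ArticleSchema.index({ category: 1 });

// Promise-based usage, equivalent to the callback form above:
const categories = await Article.distinct('category');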
