Cube.js | How to join two tables for measures and dimensions

I want to join two tables, Users and Organizations. I don't know how to connect the other table, Organizations, or where I have to write the query for that.
cube(`Users`, {
  sql: `select * from users`,

  joins: {
    Organizations: {
      relationship: `belongsTo`,
      sql: `${Users}.organization_id = ${Organizations}.id`
    }
  }
});

You have to create another schema for your other table, like below:
cube(`Organizations`, {
  sql: `select * from organizations`,
  measures: {
    count: {
      type: `count`,
      drillMembers: [id]
    }
  },
  // `id` dimension added so the join and drillMembers above can resolve
  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primaryKey: true
    }
  }
});
You just create a new schema and write the joins for whatever columns you want. After that you can easily access members of both cubes in dimensions and measures, as in the query sketched below.
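For example, here is a minimal sketch using the JavaScript client; the Users.count measure, the Organizations.name dimension, and the token/API URL are assumptions for illustration, not part of the schemas above:

import cubejs from "@cubejs-client/core";

// Hypothetical credentials, for illustration only.
const cubejsApi = cubejs("CUBEJS_TOKEN", { apiUrl: "http://localhost:4000/cubejs-api/v1" });

cubejsApi
  .load({
    measures: ["Users.count"],          // assumed measure on the Users cube
    dimensions: ["Organizations.name"], // assumed dimension on the Organizations cube
  })
  .then((resultSet) => {
    // One row per organization, with the joined user count.
    console.log(resultSet.tablePivot());
  });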

Related

How to compare two columns that do not have the same value using Sequelize ORM

I have two fields in my table, dispatchCount and qty.
I want to findOne tuple where dispatchCount is not equal to qty.
I want to do something similar to this (MySQL select rows where two columns do not have the same value) but using the Sequelize ORM.
I don't want to write the raw query myself because there are a lot of aliases and things like that. So how can I do the following using Sequelize?
SELECT *
FROM my_table
WHERE column_a != column_b
The following is the solution without writing a raw query:
const { Op } = require("sequelize");

// findOne returns a promise, so await it (or chain .then);
// `sequelize` is your Sequelize instance (Sequelize.col works too).
const en = await Entity.findOne({
  where: {
    dispatchCount: {
      [Op.ne]: sequelize.col("qty")
    }
  }
});
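For what it's worth, the same column-to-column comparison can also be written with Sequelize.where and Sequelize.col, which some find more readable; a minimal sketch (Entity is the model from the question, and the wrapper function is illustrative):

const Sequelize = require("sequelize");
const { Op } = Sequelize;

// Illustrative wrapper; Entity is the model defined elsewhere in your project.
async function findMismatchedRow() {
  return Entity.findOne({
    // Compare the two columns directly: dispatchCount != qty
    where: Sequelize.where(Sequelize.col("dispatchCount"), { [Op.ne]: Sequelize.col("qty") }),
  });
}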

Database Model Design With Laravel Eloquent

We have a problem querying our database in the intended fashion.
Tables (employees <1-n> employee_card_validity <n-1> card <1-n> stamptimes):

employees:               id
employee_card_validity:  id, employee_id, card_id, valid_from, valid_to
card:                    id, no
stamptimes:              id, card_id, timestamp
Employee is mapped onto Card via the EmployeeCardValidity pivot, which has additional attributes.
We reuse cards, which means that a card has multiple entries in the pivot table. Which card is the right one is determined by valid_from/valid_to; these ranges are constrained not to overlap. That way there is always a unique relationship from Employee to Stamptimes, where an Employee can have multiple cards and a card can belong to multiple Employees over time.
Where we fail is in defining a custom relationship from Employee to Stamptimes that captures which Stamptimes belong to an Employee. When I fetch a Stamptime, its timestamp is distinctly assigned to a Card because it falls between that card's valid_from and valid_to.
But I cannot define an appropriate relation that gives me all Stamptimes for a given Employee. The only thing I have so far is to define a static field on Employee and use it to limit the relationship to only fetch Stamptimes for the given time.
public static $date = '';

public function cardsX() {
    return $this->belongsToMany('App\Models\Tempos\Card', 'employee_card_validity',
            'employee_id', 'card_id')
        ->wherePivot('valid_from', '>', self::$date);
}
Then I would say in the Controller:
\App\Models\Tempos\Employee::$date = '2020-01-20 00:00:00';
$ags = DepartmentGroup::with(['departments.employees.cardsX.stamptimes'])->get();
But I cannot do that dynamically, depending on the actual query result, as you could with SQL:
SELECT ecv.card_id, employee_id, valid_from, valid_to, s.timestamp
FROM staff.employee_card_validity ecv
join staff.stamptimes s on s.card_id = ecv.card_id
and s.timestamp between valid_from and coalesce(valid_to, 'infinity'::timestamp)
where employee_id = ?
So my question is: is that database design unusual, or is an ORM mapper just not capable of describing such relationships? Do I have to fall back to the query builder/SQL in such cases?
Do you tailor your database model to the ORM, or the other way around?
You can try:
DB::query()->selectRaw('*')->from('employee_card_validity')
    ->join('stamptimes', function ($join) {
        return $join->on('employee_card_validity.card_id', '=', 'stamptimes.card_id')
            ->whereRaw('stamptimes.timestamp between employee_card_validity.valid_from and employee_card_validity.valid_to');
    })
    ->where('employee_id', $employeeId) // bind the employee id here
    ->get();
If your Laravel version is above 5.5, you can, I believe, create a model that extends the Pivot class, so:
EmployeeCardValidity::join('stamptimes', function ($join) {
    return $join->on('employee_card_validity.card_id', '=', 'stamptimes.card_id')
        ->whereRaw('stamptimes.timestamp between employee_card_validity.valid_from and employee_card_validity.valid_to');
})->where('employee_id', $employeeId)->get();
But the code above only translates your SQL query; I believe I could write something better if I knew your exact use cases.
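"Extending the Pivot class" would look roughly like the sketch below; the namespace and table name follow the question, but the class itself is an assumption:

<?php

namespace App\Models\Tempos;

use Illuminate\Database\Eloquent\Relations\Pivot;

// Pivot model for employee_card_validity, so the join above can be written as
// EmployeeCardValidity::join(...)->where(...)->get().
class EmployeeCardValidity extends Pivot
{
    protected $table = 'employee_card_validity';
}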

Normalise table in Entity Framework Core and migrate existing data

Let's suppose I have a table called Surveys (SurveyId, ... , SubmittedDate, LastEditedDate).
It's full of data, and I now realise I should normalise it to move the audit data into its own table, so I create a table SurveyAudits (SurveyAuditId, SubmittedDate, LastEditedDate).
When I create the table, I want to populate it with the data from Surveys.
Then I need to add a foreign key to Surveys (SurveyAuditId) so each survey links to its SurveyAudit.
Finally, I can drop the redundant columns from Surveys (SubmittedDate, LastEditedDate).
What do I add to the Up method to achieve this?
I suspect my approach so far may be unsuitable, so please steer me onto the correct path if that is the case!
Code:
public partial class CreateSurveyAudit : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable(
            name: "SurveyAudits",
            columns: table => new
            {
                SurveyAuditId = table.Column<int>(type: "int", nullable: false)
                    .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
                SubmittedDate = table.Column<DateTime>(type: "datetime2", nullable: false),
                LastEditedDate = table.Column<DateTime>(type: "datetime2", nullable: false)
            });

        // I could get the data into the new table like so, but I would not have the relationship:
        migrationBuilder.Sql(@"INSERT INTO SurveyAudits (SubmittedDate, LastEditedDate)
            SELECT SubmittedDate, LastEditedDate FROM Surveys");

        // so perhaps I could add the foreign key column first
        migrationBuilder.AddColumn<int>(...);
        migrationBuilder.CreateIndex(...);
        migrationBuilder.AddForeignKey(...);

        // then something like... (but how do I access context?)
        foreach (var survey in context.Surveys)
        {
            survey.Add(new SurveyAudit(
                SubmittedDate = survey.SubmittedDate,
                LastEditedDate = survey.LastEditedDate));
        }
        context.SaveChanges();
    }
}
You need to create a SurveyId column on your SurveyAudits table to make the relationship.
Then use the following; a fuller sketch of the whole Up method is below:
migrationBuilder.Sql(@"INSERT INTO SurveyAudits (SurveyId, SubmittedDate, LastEditedDate)
    SELECT SurveyId, SubmittedDate, LastEditedDate FROM Surveys");
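Putting the answer together, the rest of the Up method might look roughly like this after the CreateTable call; the index and foreign key names are assumptions, and you would add a matching Down method to reverse the steps:

        // 1. Add the relationship column to the (still empty) SurveyAudits table.
        migrationBuilder.AddColumn<int>(
            name: "SurveyId",
            table: "SurveyAudits",
            type: "int",
            nullable: false);

        // 2. Copy the audit data across, keeping the link to the source survey.
        migrationBuilder.Sql(@"INSERT INTO SurveyAudits (SurveyId, SubmittedDate, LastEditedDate)
            SELECT SurveyId, SubmittedDate, LastEditedDate FROM Surveys");

        // 3. Index and foreign key for the relationship (names are assumptions).
        migrationBuilder.CreateIndex(
            name: "IX_SurveyAudits_SurveyId",
            table: "SurveyAudits",
            column: "SurveyId");

        migrationBuilder.AddForeignKey(
            name: "FK_SurveyAudits_Surveys_SurveyId",
            table: "SurveyAudits",
            column: "SurveyId",
            principalTable: "Surveys",
            principalColumn: "SurveyId");

        // 4. Drop the now-redundant columns from Surveys.
        migrationBuilder.DropColumn(name: "SubmittedDate", table: "Surveys");
        migrationBuilder.DropColumn(name: "LastEditedDate", table: "Surveys");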

OrientDB CRUD for large and nested data

I'm very new to OrientDB. I'm trying to create a structure to insert and retrieve large data with nested fields, and I couldn't find a proper solution or guideline.
This is the structure of the table I want to create:
{
    UID,
    Name,
    RecordID,
    RecordData: [
        {
            RAddress,
            ItemNo,
            Description
        },
        {
            RAddress,
            ItemNo,
            Description
        },
        {
            RAddress,
            ItemNo,
            Description
        }
        ....Too many records....
    ]
},
{
    UID,
    Name,
    RecordID,
    RecordData: [
        {
            RAddress,
            ItemNo,
            Description
        },
        {
            RAddress,
            ItemNo,
            Description
        },
        {
            RAddress,
            ItemNo,
            Description
        }
        ....Too many records....
    ]
}
....Too many records....
Now, I want to retrieve the Description field from the table by querying ItemNo and RAddress in bulk.
For example, I have 50K (50,000) pairs of UID or RecordID and ItemNo or RAddress, and based on this data I want to retrieve the Description field. I want to do this in the fastest possible way, so can anyone please suggest a good query for this task?
I have 500M records, most of which contain 10-12 words each.
Can anyone suggest CRUD queries for it?
Thanks in advance.
You might want to create a single record using CONTENT, like so:
INSERT INTO Test CONTENT {"UID": 0,"Name": "Test","RecordID": 0,"RecordData": {"RAddress": ["RAddress1", "RAddress2", "RAddress3"],"ItemNo": [1, 2, 3],"Description": ["Description1", "Description2", "Description3"]}}
That'll get you started with embedded values and JSON. However, if you want to do a bulk insert you should write a function; there are many ways to do so, but if you want to stay in Studio, go to the Functions tab.
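As an alternative sketch to a Studio function, OrientDB also accepts SQL batch scripts, so several records can be inserted in one script (the class name Test is from the INSERT above; the values are made up for illustration):

BEGIN;
INSERT INTO Test CONTENT {"UID": 1, "Name": "Test1", "RecordID": 1, "RecordData": {"RAddress": ["RAddressA"], "ItemNo": [11], "Description": ["DescriptionA"]}};
INSERT INTO Test CONTENT {"UID": 2, "Name": "Test2", "RecordID": 2, "RecordData": {"RAddress": ["RAddressB"], "ItemNo": [22], "Description": ["DescriptionB"]}};
COMMIT;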
As for the retrieving part:
SELECT RecordData[Description] FROM Test WHERE (RecordData[ItemNo] CONTAINSTEXT "1") AND (RecordData[RAddress] CONTAINSTEXT "RAddress1")

Postgresql 9.5 JSONB nested arrays LIKE statement

I have a jsonb column, called "product", that contains a jsonb object similar to the one below. I'm trying to figure out how to do a LIKE statement against this data in PostgreSQL 9.5.
{
  "name": "Some Product",
  "variants": [
    {
      "color": "blue",
      "skus": [
        {
          "uom": "each",
          "code": "ZZWG002NCHZ-65"
        },
        {
          "uom": "case",
          "code": "ZZWG002NCHZ-65-CASE"
        }
      ]
    }
  ]
}
The following query works for an exact match.
SELECT * FROM products WHERE product #> '{variants}' @> '[{"skus":[{"code":"ZZWG002NCHZ-65"}]}]';
But I need to support LIKE statements such as "begins with", "ends with" and "contains". How would this be done?
Example: Lets say I want all products returned that have a sku code that begins with "ZZWG00".
You should unnest variants and skus (using jsonb_array_elements()) so you can examine sku->>'code':
SELECT DISTINCT p.*
FROM
    products p,
    jsonb_array_elements(product -> 'variants') AS variants(variant),
    jsonb_array_elements(variant -> 'skus') AS skus(sku)
WHERE
    sku ->> 'code' LIKE 'ZZW%';
Use DISTINCT as you'll have multiple rows as a result of multiple matches in one product.
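The other patterns from the question only change the LIKE pattern; for example, a "contains" match on the same structure (the pattern values here are illustrative, and ILIKE makes the match case-insensitive):

SELECT DISTINCT p.*
FROM
    products p,
    jsonb_array_elements(product -> 'variants') AS variants(variant),
    jsonb_array_elements(variant -> 'skus') AS skus(sku)
WHERE
    -- 'ZZWG00%' would be "begins with", '%-CASE' would be "ends with"
    sku ->> 'code' ILIKE '%002nchz%';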
