I'm writing a stored procedure in .NET to do some complex calculations that can't be written easily in pure MDX. The first problem I'm having is how to retrieve a set of data in a tabular form to pass to my calculation.
My code so far is written below. I would have thought that after we retrieve our value at position ***1, we would have all the data in memory to interact with. However, it seems that at position ***2, a Query Subcube event is issued to the storage engine for each and every day in our range. This is devastating to performance.
Is there something I'm doing wrong? Is there another method I can call to evaluate the set all at once?
// First get the date range that we'd like to calculate over.
// (These values are constant here for example only)
DateTime date = new DateTime(2012, 4, 1);
int dateFrom = KeyFromDate(date.AddDays(-360));
int dateTo = KeyFromDate(date);
string dateRange = string.Format(
    "[Date].[Date].&[{0}]:[Date].[Date].&[{1}]",
    dateFrom,
    dateTo
);
Expression expression = new Expression(dateRange + "*[Measures].[My Measure]");
MDXValue value = expression.CalculateMdxObject(null); // ***1
foreach (var tuple in value.ToSet().Tuples)
{
    // ToInt32() returns an int, so assign to an int rather than an MDXValue
    int tupleValue = MDXValue.FromTuple(tuple).ToInt32(); // ***2
}
Run SQL Profiler, connect to Analysis Services, and on the "Events Selection" tab check "Show all events" and select "Get Data From Aggregation", "Get Data From Cache", "Query Subcube" and "Query Subcube Verbose".
First, read this document (see page 18) to understand how "Query Subcube Verbose" works: http://www.microsoft.com/en-us/download/details.aspx?id=17303
Then, in Visual Studio (where you're debugging your procedure), step through line ***1 in debug mode
and see in SQL Profiler what is queried in the verbose events: which measure group and which attributes.
Then step through ***2 and see again in SQL Profiler, in the verbose events, what is queried.
I believe the set of attributes is different, so it may happen that at ***1 it uses some aggregation, while at ***2, when "value" is present in the tuple, there is no aggregation for that set of attributes; so instead of making one "read from aggregations", it makes several "read from measure group cache" reads.
I can't tell more exactly because I don't have your cube. Try to find this out with the "Query Subcube Verbose" events, and try to use BIDS Helper to create the necessary aggregations manually (with a specific set of attributes); it may help.
Using ServiceStack OrmLite 6.4 and Azure SQL Server with SQLServerDialect2012, we have an issue with enums causing excessive stalling and timeouts.
If we just convert it to a string, it's as quick as it should be.
var results = db.Select(q => q.SomeColumn == enum.value); // -> 3.5 seconds
var results2 = db.Select(q => q.SomeColumn.ToString() == enum.value.ToString()); // -> 0.08 seconds
We are using default settings, so the enum in the DB is defined as varchar(255).
Both queries give the same result.
To track down the issue we wanted to see what it's actually firing, but all we get is a query with some #1, #2, etc. placeholders, with no indication of what parameters were used or how they are defined.
All our attempts to get a 1:1 SQL string we can use to manually test the query and see the results have failed... MiniProfiler was the closest, as it shows the parameter values,
but it does not contain the details necessary to recreate the query used and reproduce the issue we have. (Manually recreating the query gives 80 ms, as above.)
Trying to get the execution plan along with the query also fails.
db.ExecuteSql("SET STATISTICS PROFILE ON;");
var results = db.Select(q => q.SomeColumn == enum.value);
db.ExecuteSql("SET STATISTICS PROFILE OFF;");
This only returns data, not any of the extra info I was hoping for.
I have not been able to find any sites or threads that explain how others get any kind of debug info.
What is the correct next step here?
OrmLite's Logging & Introspection page shows how you can view the SQL generated by OrmLite.
E.g. Configuring a debug logger should log the generated SQL and params:
LogManager.LogFactory = new ConsoleLogFactory(debugEnabled:true);
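If you also want to grab the SQL programmatically, OrmLite's GetLastSql() extension returns the last statement executed on a connection. A minimal sketch (MyRow and MyEnum are placeholder names here, and dbFactory stands for your existing OrmLiteConnectionFactory):
using System;
using ServiceStack.Logging;
using ServiceStack.OrmLite;

// Log generated SQL + params to the console while debugging:
LogManager.LogFactory = new ConsoleLogFactory(debugEnabled: true);

using (var db = dbFactory.Open())
{
    var results = db.Select<MyRow>(q => q.SomeColumn == MyEnum.SomeValue);

    // The last SQL statement OrmLite executed on this connection:
    Console.WriteLine(db.GetLastSql());
}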
I could use some help with an AnyLogic model.
Model (short): a manufacturing scenario where orders move along individual routes. The workplaces (WP) are created dynamically at simulation start; their names, quantity, and other parameters are stored in a database (Excel import). The orders are also created according to an import. The agent population "order" has a collection routing which contains the workplaces it has to stop at, in that specific order.
Target: I want a moveTo block in Main which finds the next destination of the agent order.
Problem and solution paths:
I set the destination type to agent, and in the Agent field I typed a function, agent.getDestination(). This function lives in order and returns the next entry of the collection (destinationName = routing.get(i)). With this I get a datatype error (at runtime, not while compiling). I guess it's because the database does not save the entries as the WP type but only as String.
Is there a possibility to create a collection of agents from an Excel file?
After this I tried to use the same getDestination as a String, and then find via findFirst the WP matching the returned name and return it as a WP: WP targetWP = findFirst(wps, w -> w.name == destinationName);
Of course wps (the population of workplaces) couldn't be found.
How can I search the population?
Maybe with an AgentLink?
I think it is not that difficult, but I can't find an answer or a solution. As you can tell, I'm a beginner... I hope the description is good and someone can help me or give me a hint :)
Thanks
Is there a possibility to create a collection of agents from an Excel file?
Not directly using the collection's properties and, as you've seen, you can't have database (DB) column types which are agent types.¹
But this is relatively simple to do directly via Java code (and you can use the Insert Database Query wizard to construct the skeleton code for you).
After this I tried to use the same getDestination as a String, and then find via findFirst the WP matching the returned name and return it as a WP
Yes, this is one approach. If your order details are in Excel/the database, they are presumably referring to workplaces via some String ID (which will be a parameter of the workplace agents you've created from a separate Excel worksheet/database table). You need to use the Java equals method to compare strings though, not == (which is for comparing numbers or checking whether two objects are the same object).
I want a moveTo block in main which finds the next destination of the agent order
So the general overall solution is
Create a population of Workplace agents (let's say called workplaces in Main) from the DB, each with a String parameter id or similar mapped from a DB column.
Create a population of Order agents (let's say called orders in Main) from the DB and then, in their on-startup action, set up their collection of workplace IDs (type ArrayList, element class String; let's say called workplaceIDsList) using data from another DB table.
Order probably also needs a working variable storing the next index in the list that it needs to go to (so let's say an int variable nextWorkplaceIndex which starts at 0).
Write a function in Main called getWorkplaceByID that has a single String argument id and returns a Workplace. This gets the workplace from the population that matches the ID; a one-line way similar to yours is findFirst(workplaces, w -> w.id.equals(id)).
The MoveTo block (which I presume is in Main) needs to move the Order to an agent defined by getWorkplaceByID(agent.workplaceIDsList.get(nextWorkplaceIndex++)). (The ++ bit increments the index after evaluating the expression so it is ready for the next workplace to go to.)
For populating the collection, you'd have two tables, something like the below (assuming using strings as IDs for workplaces and orders):
orders table: columns for parameters of your orders (including some String id column) other than the workplace-list. (Create one Order agent per row.)
order_workplaces table: columns order_id, sequence_num and workplace_id (so with multiple rows specifying the sequence of workplace IDs for an order ID).
In the On startup action of Order, set up the skeleton query code via the Insert Database Query wizard as below, where we want to loop through all rows for this order's ID and do something; we'll change the skeleton code to add entries to the collection instead of just printing stuff via traceln like the skeleton code does.
Then we edit the skeleton code to look like the below. (Note we add an orderBy clause to the initial query so we ensure we get the rows in ascending sequence number order.)
List<Tuple> rows = selectFrom(order_workplaces)
    .where(order_workplaces.order_id.eq(id))
    .orderBy(order_workplaces.sequence_num.asc())
    .list();

for (Tuple row : rows) {
    workplaceIDsList.add(row.get(order_workplaces.workplace_id));
}
¹ The AnyLogic database is a normal relational database (HSQLDB, in fact), and databases only understand their own specific data types like VARCHAR, with AnyLogic and the libraries it uses translating these to Java types like String. In the user interface, AnyLogic makes it look like you set the column types as int, String, etc., but these are really the Java types that the columns' contents will ultimately be translated into.
AnyLogic does support columns which have option list types (and the special Code type column for columns containing executable Java code), but these are special cases using special logic under the covers to translate the column data (which is ultimately still a string of characters) into the appropriate option list instance or, for Code columns, into compiled-on-the-fly-and-then-executed Java.
Welcome to Stack Overflow :) To create a population via Excel import, you have to create a function and call it with code like this. You also need an empty population.
int n = excelFile.getLastRowNum(YOUR_SHEET_NAME);
for (int i = FIRST_ROW; i <= n; i++) {
    String name = excelFile.getCellStringValue(YOUR_SHEET_NAME, i, 1);
    double SEC_PARAMETER_TO_READ = excelFile.getCellNumericValue(YOUR_SHEET_NAME, i, 2);
    // add_wps(...) is the generated function that adds a new WP agent to the wps population
    WP workplace = add_wps(name, SEC_PARAMETER_TO_READ);
}
Now if you want to get a workplace by name, you have to create a function similar to your attempt.
Function body:
WP workplaceToFind = findFirst(wps, w -> w.name.equals(destinationName));
if (workplaceToFind != null) {
    // do whatever you want with the found workplace
}
Given the following code:
var dbRecords = _context.Alerts.AsNoTracking()
    .Where(a => a.OrganizationId == _authorization.OrganizationId)
    .ToList();

var dbRecords2 = _context.Alerts
    .Where(a => a.OrganizationId == _authorization.OrganizationId)
    .ToList();

foreach (var untrackedRecord in dbRecords) {
    var trackedRecord = dbRecords2.First(a => a.Id == untrackedRecord.Id);
    Assert.AreEqual(untrackedRecord.TimeStamp.Ticks, trackedRecord.TimeStamp.Ticks);
}
Where the TimeStamp data is stored in SQL Server 2012 in a column defined as datetime2(0).
The Assert fails, and the debugger demonstrates that the two Ticks values are always different.
Expected: 636179928520000000 But was: 636179928523681935
The untracked value will always be rounded off to the nearest second (which is expected, based on what SQL is storing). When creating the record, the value I'm saving comes from DateTime.Now.
Testing some more, the inconsistent ticks don't appear for every object I'm testing, only for records I've inserted recently. Looking at the code, and given the way the column is defined, it's not obvious to me why that would matter.
For now, to get my tests to pass, I'm just comparing the DateTime values down to the second, which is all that's required. However, I'd like to understand why this is happening: why can I not reliably compare two DateTime values depending on whether or not the entities are being tracked?
I figured this out, so I'm answering my own question; it turns out I left out a key piece of information. I mentioned that this issue came up in testing. What I didn't mention is that we're inserting the records and then testing all within a single transaction, and within a single DbContext.
Because I use the same DbContext for all work, the Alert objects that are inserted for testing are cached. When I query the objects using AsNoTracking, the DbContext has to refresh the objects before giving them back to me (since their current state isn't being tracked, and therefore is unknown to EF), apparently without updating what's in the cache (since we told EF we don't want to track the objects).
Querying for the same objects without AsNoTracking results in a cache hit; those objects that were inserted are still in the cache, so the cached versions are returned.
Given that, it's clear why the Ticks aren't matching up. The non-cached objects are pulling the DateTime values from the database, where the precision is defined to only store the time down the nearest second. The cached objects have the original DateTime.Now values, which stores the time down to ms. This explains why the Ticks don't match between the two DateTimes, even though both objects represent the same underlying database record.
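Since only second-level precision matters for these tests, one way to make the assertion robust is to round both sides to whole seconds before comparing. A minimal sketch with a hypothetical helper (note that datetime2(0) rounds to the nearest second, so rounding, rather than truncating, mirrors what SQL Server stored):
// Hypothetical helper: round a DateTime to the nearest whole second,
// mirroring SQL Server's rounding into a datetime2(0) column.
static DateTime RoundToSeconds(DateTime value)
{
    long remainder = value.Ticks % TimeSpan.TicksPerSecond;
    long rounded = value.Ticks - remainder
        + (remainder >= TimeSpan.TicksPerSecond / 2 ? TimeSpan.TicksPerSecond : 0);
    return new DateTime(rounded, value.Kind);
}

// Usage in the test:
Assert.AreEqual(
    RoundToSeconds(untrackedRecord.TimeStamp).Ticks,
    RoundToSeconds(trackedRecord.TimeStamp).Ticks);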
I am trying to start using EF6 for a project. My database is already filled with millions of records.
I can't find a clear explanation of how EF sends T-SQL to SQL Server, and I am afraid that I am going to download a bunch of data to the user for no reason.
In the code below I have found three ways to get my data into a List<>, but I am not sure which one applies the WHERE clause on the SQL side.
I do not want to fill the client with millions of records and then query (filter) that data on the client.
using (rgtBaza baza = new rgtBaza())
{
    // Option 1: raw SQL with parameters
    var t1 = baza.Database.SqlQuery<CJE_DOC>(
        "select * from cje_doc where datum between @od and @do",
        new SqlParameter("od", this.dateTimePickerOD.Value.Date),
        new SqlParameter("do", this.dateTimePickerDO.Value.Date)).ToList();

    // Option 2: LINQ method syntax
    var t2 = baza.CJE_DOC
        .Where(s => s.DATUM.Value >= this.dateTimePickerOD.Value.Date
                 && s.DATUM.Value <= this.dateTimePickerDO.Value.Date)
        .ToList();

    // Option 3: LINQ query syntax
    var query = from b in baza.CJE_DOC
                where b.DATUM.Value >= this.dateTimePickerOD.Value.Date
                   && b.DATUM.Value <= this.dateTimePickerDO.Value.Date
                select b;
    var t3 = query.ToList();

    this.dataGridViewCJENICI.DataSource = t3;
}
In all 3 cases the filtering will happen on the database side; the filtering (or WHERE clause) will not take place on the client side.
If you want to verify that this is true, especially for your last 2 options, add some logging so that you can see the generated SQL:
baza.Database.Log = s => Console.WriteLine(s);
In this case, since you are using EF already, choose the 2nd or 3rd options, they are both equivalent with different syntax. Pick your favorite syntax.
In all of those examples, EF6 will generate a SQL query including the where clause - it won't perform the where clause on the client.
It won't actually retrieve any data from the database until you iterate through the results, which in the examples above, is when you call .ToList().
EF6 would only run the filter on the client if you called something like:
baza.CJE_DOC.ToList().Where(x => x.Field == value)
In this instance, it would retrieve the entire table when you called ToList(), and then use a client-side Linq query to filter the results in the where clause.
Any of the 3 will run the query on the SQL Server.
EF relies on LINQ's deferred execution model to build up an expression tree. Once you take an action that causes the expression to be enumerated (e.g. calling ToList(), ToArray(), or any of the other To*() methods), it will convert the expression tree to SQL, send the query to the server, and then start returning the results.
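As a quick illustration using the names from the question (odDate and doDate are placeholder locals for the picker values): building the query sends nothing to the server; enumerating it does.
// No SQL is sent yet; this only builds up an expression tree:
IQueryable<CJE_DOC> query = baza.CJE_DOC
    .Where(d => d.DATUM.Value >= odDate && d.DATUM.Value <= doDate);

// The expression tree is translated to SQL and executed here:
List<CJE_DOC> rows = query.ToList();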
One of the side effects of this is that when using the query or lambda syntax, expressions that EF does not understand how to convert to SQL will cause an exception.
If you absolutely need to use some code that EF can't handle, you can break your code into multiple segments: filtering things down as far as possible via code that can be converted to SQL, using the AsEnumerable() method to "close off" the EF expression, and doing your remaining filtering or transformations using LINQ to Objects, as sketched below.
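A sketch of that pattern, again with the question's entities (SomeLocalPredicate is a placeholder for logic EF can't translate):
var results = baza.CJE_DOC
    .Where(d => d.DATUM.Value >= odDate && d.DATUM.Value <= doDate) // translated to SQL
    .AsEnumerable()                      // "closes off" the EF expression
    .Where(d => SomeLocalPredicate(d))   // runs client-side via LINQ to Objects
    .ToList();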
I have a WinForms app with a Telerik dropdown checklist that lets the user select a group of state names.
I'm using EF, and the database is stored in Azure SQL.
The code then hits a database of about 17,000 records and filters the results to only include states that are checked.
It works fine. I want to update a count on the screen whenever they change the list box.
This is the code, in the itemCheckChanged event:
var states = stateDropDownList.CheckedItems.Select(i => i.Value.ToString()).ToList();
var filteredStops = (from stop in aDb.Stop_address_details
                     where states.Contains(stop.Stop_state)
                     select stop).ToArray();
ExportInfo_tb.Text = "Current Stop Count: " + filteredStops.Count();
It works, but it is slow.
I tried to load everything into a memory variable and then query that instead of the database, but I can't seem to figure out how to do that.
Any suggestions?
Improvement:
I picked up a noticeable improvement by limiting the amount of data coming down:
var filteredStops = (from stop in aDb.Stop_address_details
                     where states.Contains(stop.Stop_state)
                     select stop.Stop_state).ToList();
And better yet:
int count = (from stop in aDb.Stop_address_details
             where states.Contains(stop.Stop_state)
             select stop).Count();
ExportInfo_tb.Text = "Current Stop Count: " + count.ToString();
The performance of your query actually has nothing to do with Contains in this case. Contains is pretty performant. The problem, as you picked up on in your third solution, is that you are pulling far more data over the network than required.
In your first solution you are pulling back all of the rows from the server with the matching stop state and performing the count locally. This is the worst possible approach. You are pulling back data just to count it and you are pulling back far more data than you need.
In your second solution you limited the data coming back to a single field which is why the performance improved. This could have resulted in a significant improvement if your table is really wide. The problem with this is that you are still pulling back all the data just to count it locally.
In your third solution EF will translate the .Count() method into a query that performs the count for you. So the count will happen on the server, and the only data returned is a single value: the result of the count. Since network latency can often be (but is not always) the longest step when performing a query, returning less data can often result in significant gains in query speed.
The query translation of your final solution should look something like this:
SELECT COUNT(*) AS [value]
FROM [Stop_address_details] AS [t0]
WHERE [t0].[Stop_state] IN (@p0)
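For reference, the same server-side count can be written by folding the filter into Count() directly; EF translates it the same way:
int count = aDb.Stop_address_details
    .Count(stop => states.Contains(stop.Stop_state));
ExportInfo_tb.Text = "Current Stop Count: " + count;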