I am playing with the HierarchyId columns in SQL Server in order to decide whether to use them in a new project.
I read some articles, watched some tutorials, and ran some tests, and the basic concepts are clear to me.
However, I have a problem inserting new records into the table.
Let's say I have a structure like this:
- Metallica (/1/)
- Kill em all (/1/1/)
- Hit the lights (/1/1/1/)
- Whiplash (/1/1/3/)
- Iron Maiden (/2/)
- Killers (/2/1/)
- Megadeth (/3/)
Imagine I want to insert a new song (Motorbreath) under Kill em all, but I want to keep the list sorted in alphabetical order.
So, my final list should be like this:
- Metallica (/1/)
- Kill em all (/1/1/)
- Hit the lights (/1/1/1/)
- Motorbreath (/1/1/2/)
- Whiplash (/1/1/3/)
- Iron Maiden (/2/)
- Killers (/2/1/)
- Megadeth (/3/)
How can I achieve this?
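From what I have read so far, hierarchyid's GetDescendant(child1, child2) returns a node that sorts between the two given children, which looks like exactly this case. Is something like this sketch the right approach (the table name MusicTree and its columns are made up)?

DECLARE @parent hierarchyid = hierarchyid::Parse('/1/1/');    -- Kill em all
DECLARE @before hierarchyid = hierarchyid::Parse('/1/1/1/');  -- Hit the lights
DECLARE @after  hierarchyid = hierarchyid::Parse('/1/1/3/');  -- Whiplash

-- GetDescendant(@before, @after) yields a node between them, here /1/1/2/
INSERT INTO MusicTree (Node, Name)
VALUES (@parent.GetDescendant(@before, @after), N'Motorbreath');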
In my AnyLogic model I successfully create plots of datasets that count the number of trucks arriving from terminals each hour in my simulation. Now I want to add the actual/"observed" number of trucks arriving at a terminal, to compare my simulation against these numbers. I added these numbers to a database table (see picture below). Is there a simple way of adding this data to the plot?
I tried it by creating a variable that reads the database table every hour and adding that to a dataset (as can be seen in the pictures below), but unfortunately this did not work (the plot was empty).
Maybe simply delete the variable and fill the dataset at the start of the model (e.g., in Main's "On startup" action) by looping through the database table data. Use the database query wizard to create a for-loop. Something like this should work:
// number of rows in the observed_arrivals table, used as the dataset capacity
int numEntries = (int) selectFrom(observed_arrivals).count();
DataSet myDataSet = new DataSet(numEntries);
// read all rows and add (hour, observed value) pairs to the dataset
List<Tuple> rows = selectFrom(observed_arrivals).list();
for (Tuple row : rows) {
    myDataSet.add(row.get(observed_arrivals.hour), row.get(observed_arrivals.terminal_a));
}
// show the dataset on the chart
myChart.addDataSet(myDataSet);
You don't explain why it "didn't work" (what errors/problems did you get?), nor where you defined these elements.
(1) Since you want both observed (empirical) and simulated arrivals per terminal, datasets for each should be in the Terminal agent. And then the replicated plot (in Main) can have two data entries referring to data sets terminals(index).observedArrivals and terminals(index).simulatedArrivals or whatever you name them.
(2) Using getHourOfDay to add to the observed dataset is wrong because that just returns 0-23 (i.e., the hour in the current day for the current model date). Your database table looks like it has hours since model start, so you just want time(HOUR) to get the model time in elapsed hours (irrespective of what the model time unit is). Or possibly time(HOUR) - 1 if you only want to update the empirical arrivals for the hour at the end of that hour (i.e., at the same time that you updated the simulated arrivals).
(3) Using a Variable to get the database value each hour doesn't work because a variable's initial value is only evaluated once at model initialisation. You want an hourly cyclic Event in Terminal instead which adds the relevant row's value. (You need to use the Insert Database Query wizard to generate the relevant Java code for the query you need in the event's action.)
(4) Because you have a database table with specifically-named columns for each terminal (columns terminal_a and presumably terminal_b etc.) that makes it slightly more awkward. (This isn't proper relational table design where, instead of 4 columns for the 4 terminals, you'd instead have two columns for terminal_id and observed_value with a row for each time period and terminal combination.)
So your database query expression (in your Terminal agents) will need to use the SQL format (not the QueryDSL format) so that you can 'stitch in' the correct column name into the SQL, as in the sketch below.
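As a sketch only (the dataset name observedArrivals and the colName variable are assumptions, and the Insert Database Query wizard will generate the exact query call for you; selectFirstValue shown here is one of AnyLogic's database query functions), the hourly event's action could look something like:

// colName would be "terminal_a", "terminal_b", ... depending on this Terminal
int hour = (int) time(HOUR);
Object value = selectFirstValue(
    "SELECT " + colName + " FROM observed_arrivals WHERE hour = " + hour);
observedArrivals.add(hour, ((Number) value).doubleValue());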
I have a key column in two many-to-many related tables, and a sample representation of the data is -
(attaching a sample version of the table to get the point across, as there are numerous other columns that do not contribute to this visual)
table 1 -
table 2 -
I am making a line graph with date on the x-axis and value1 and value2 on the y-axis. value1 holds for all dates; it is basically a standard target value. Now I want all the value1 entries summed up to show in my visual as value1, not just the ones for dates on which I have data. To explain it better: I want 1717 on the graph as well, like the total in the table in the following image -
visual -
Is there a way to do this in Power BI? I tried making a shared dimension of all unique keys as a separate table and connecting both tables to it, but that made no change to the visual.
You can follow the steps below to achieve your required output -
Step-1 Create a custom column in your table 1 as below -
value_1_sum =
CALCULATE(
SUM(table_2[value1]),
ALL(table_2)
)
Step-2 Configure your line chart as below. Remember, the aggregation for the new custom column should be Average, as shown in the image.
And here is the final output -
Additional reference: here below is the list of options you will get after right-clicking on the measure name -
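As a side note (an untested sketch), the same total could also be written as a measure instead of a calculated column, which avoids having to switch the aggregation to Average:

value1 Total =
CALCULATE (
    SUM ( table_2[value1] ),
    ALL ( table_2 )
)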
I need your help with an SQL query that has to remove duplicate entries from a table, mostly using the datestamp column as the criterion, in two passes.
The DBMS in question is Microsoft SQL Server.
Here are a few more details:
Terminology: a module is basically a group of single-machine workplaces on which users operate.
Table:
The ModNam column is fixed; there are 15 modules from M A01 to M A15, then comes row B (M B01 ... M B15), and so on up to row F.
The Pos column is irrelevant at the moment.
The MdCod column represents the code of the machine added to a position in a certain module. It can be replaced by another machine at any given time.
I have one query that will insert data into this table by copying entries from another table every time a new machine is added to one of the positions.
The tricky part for me is a second query that should compare records in two passes, as sketched below, and:
1) Inside the same module (first pass of the query, shown in red in the attached example pic): if the ModNam value is the same and MdCod matches between entries, then the most recent datestamp decides the single entry that stays, and the other duplicates get deleted.
2) Across modules (second pass of the query, shown in purple in the attached example pic): if the ModNam values are different and MdCod matches between entries, then the most recent datestamp again decides the single entry that stays, and the other duplicates get deleted.
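To make the two passes concrete, something along these lines is what I am after (table and column names here are placeholders; DateStamp stands for the datestamp column):

-- Pass 1: same ModNam and same MdCod -> keep only the most recent entry
;WITH pass1 AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ModNam, MdCod
                              ORDER BY DateStamp DESC) AS rn
    FROM dbo.ModuleTable
)
DELETE FROM pass1 WHERE rn > 1;

-- Pass 2: same MdCod across different modules -> keep only the most recent entry
;WITH pass2 AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY MdCod
                              ORDER BY DateStamp DESC) AS rn
    FROM dbo.ModuleTable
)
DELETE FROM pass2 WHERE rn > 1;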
Please help and advise.
Example pic (updated):
Thank you all in advance.
1 Background
As wireless communication technology develops faster and faster, Long Term Evolution (LTE) networks are widely deployed all over the world. Since the LTE network is based on a wide-band radio access technology, there is only one radio carrier available to most operators. Because the frequency resource is limited, many telecom operators run same-frequency TD-LTE networks. In same-frequency TD-LTE networks, overlapping coverage seriously impacts download speed. RF optimization is the conventional method of controlling overlapping coverage. However, network optimization resources are always limited, so we must evaluate the wireless network coverage and quality as a whole and identify which problems to address first.
Drive-test data is widely used to evaluate wireless network coverage and quality, and we further associate the drive-test data with the map. Traditionally, we use the desktop software MapInfo to analyze the drive-test data points against streets and roads, which is very heavy manual work. Recently, we found a new approach using the PostGIS middleware: we can convert the city map into raster data and then associate the drive-test data points with the city map raster data, just as we would associate the drive-test data points with a polygon object.
In this article, we describe how to associate the map points with a raster. My question is: how can we improve the speed of the function st_worldtorastercoordx(rast,geom) when the raster object has more than 10000*10000 pixels? This question concerns the PostGIS raster middleware.
2 Create drive-test data point object
2.1 Create point object field
In a previous step, we used the Kettle middleware to upload the drive-test data file into the PostgreSQL database. In the database, the destination table is atu_sample; now we will alter the table to add a geometry column "point".
Here comes the command:
alter table atu_sample add column geom geometry(point,4326)
2.2 Assign value to point
In the drive-test data file, we have the longitude and latitude to represent the spatial position; now we will convert them to a geometry object "point". (Note that the table column is spelled longtitude, as used in the commands below.)
Here comes the command:
update atu_sample set geom =
st_geomfromtext('POINT('||longtitude||' '||latitude||')', 4326);
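As a side note, an equivalent form that avoids the string concatenation (ST_MakePoint and ST_SetSRID are standard PostGIS functions; the column names are the same as above) would be:

update atu_sample set geom = ST_SetSRID(ST_MakePoint(longtitude, latitude), 4326);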
3 Create raster data
In order to associate the point object with raster data, we must first set up the raster data. In this article, we will convert the city map, which is vector data, into raster data.
3.1 Import Wuhan city boundary polygon
3.1.1 Create shape file
In this project, we will import the Wuhan city boundary polygon into the PostGIS database (Wuhan is the capital of Hubei Province, China). This relies on the MapInfo format conversion function; the GUI entry can be seen in the following diagram:
From the picture we can see that the format of the target file is the ESRI Shapefile, which is the default format that PostGIS supports.
3.1.2 Create PG scripts
Then we will convert the Wuhan city map from the ESRI shapefile into a PostGIS script. This relies on the tool shp2pgsql, which is a component of PostGIS.
Here comes the command:
shp2pgsql -W "GBK" -s 4326 wh_region.shp > 123.txt
Here -s 4326 is the coordinate system (SRID) parameter, which will matter for future spatial objects.
3.1.3 Script execution
Now we will open pgAdmin and run the script against the PostGIS database; finally we get a table with the name wuhan. When the script has executed, we have the spatial data table in PostGIS.
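(Equivalently, the script can be run from the command line with psql; the database name mydb here is a placeholder:)

psql -d mydb -f 123.txt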
3.2 Measuring the height and width of Wuhan boundary
3.2.1 Drawing the ConvexHull of Wuhan city
Figure 2-4: ConvexHull of Wuhan city
In Figure 2-4, we draw the ConvexHull of Wuhan city, which encloses the Wuhan city boundary.
3.2.2 Measure height and width with the Ruler tool
In MapInfo, there is a tool named Ruler, which can be used to measure the height and width of the ConvexHull of Wuhan city.
Figure 2-5: Measuring height and width
Finally we get the measurements: 132.7 km and 155.5 km.
3.3 Create the Wuhan raster data
In our project, the pixels will be 100*100 meters. Therefore we first calculate the parameters by which to equally divide the height and width of the ConvexHull of Wuhan city. According to the data measured previously, height 132.7 km and width 155.5 km, the equal-division parameters are 1327 and 1555.
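As a quick worked check (each pixel covers 100 m):
132700 m / 100 m = 1327 (height) and 155500 m / 100 m = 1555 (width).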
3.3.1 Create raster persistence table
Now we will create a table that can store the raster data. Here comes the SQL command:
CREATE TABLE myrasters
(
rid integer NOT NULL,
rast raster,
CONSTRAINT myrasters_pkey PRIMARY KEY (rid)
);
3.3.2 Insert raster record
From the previous steps we have the geometry object from the table wh_region and the equal-division parameters, so we can create the raster instance with the function ST_AsRaster.
Here comes the command:
insert into myrasters SELECT gid,ST_AsRaster( geom ,1327, 1555, '8BUI') from wh_region;
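As a quick sanity check (a sketch; ST_Width and ST_Height are standard PostGIS raster functions), we can verify the dimensions of the stored raster:

select rid, ST_Width(rast), ST_Height(rast) from myrasters;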
4 Associate the point with the raster
4.1 Add drive-test point raster coordinate data fields
We want to find the relation between each point and the raster, that is to say, which pixel a certain point is located in. In order to record this relation, we first add two data fields, x integer and y integer, to the table atu_sample.
Here comes the command:
alter table atu_sample add column x integer;
alter table atu_sample add column y integer;
4.2 Raster mapping
With the functions st_worldtorastercoordx(rast,geom) and st_worldtorastercoordy(rast,geom), we can easily find out which pixel a certain point is located in and get its pixel coordinates. Then we update the table.
Here comes the command:
update atu_sample set x =st_worldtorastercoordx(rast,geom) from myrasters;
update atu_sample set y =st_worldtorastercoordy(rast,geom) from myrasters;
4.3 Performance problem of Raster mapping
In the above example, we have 12000 points and a 1327*1555 raster; it takes more than 160 seconds to finish the task. In a future project we will have more than 1200000 points. How can we fix the performance problem?
For 3.3 please refer to this article. You don't have to measure the dimensions of your area to convert it to raster. Make sure you produce a tiled raster coverage with quite small tiles.
For 4: you don't have to compute the position of your points in the raster. Create points with ST_MakePoint() from your table of coordinates and make sure they are in the proper coordinate system. Then create a new table with the points projected into the same coordinate system as the raster table with ST_Transform(), and index this table. Then query the pixel values with a query like this:
SELECT ST_Value(rast, geom) AS val   -- value of the pixel under each point
FROM rastertable, geomtable
WHERE ST_Intersects(rast, geom);
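To make the preparation steps above concrete, here is a sketch with placeholder names (coords, lon, and lat are assumptions, and 32650, UTM zone 50N, is only an example target SRID — it must match the raster's actual SRID):

-- build the point table in the raster's coordinate system
CREATE TABLE geomtable AS
SELECT id,
       ST_Transform(ST_SetSRID(ST_MakePoint(lon, lat), 4326), 32650) AS geom
FROM coords;

-- spatial indexes let ST_Intersects prune candidates instead of scanning all pairs
CREATE INDEX ON geomtable USING GIST (geom);
CREATE INDEX ON rastertable USING GIST (ST_ConvexHull(rast));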