So I loaded a bunch of NHD data, and the geometry ended up as MultiPolygonZM (and pointZM and areaZM for other tables)
way geometry(MultiPolygonZM,900913)
I've tested the query, and it's returning data when run against the db directly. Here's my style:
<Style name="waterways">
<Rule>
<LineSymbolizer stroke="blue" stroke-width="3" />
</Rule>
</Style>
<Layer name="waterways" status="on">
<StyleName>waterways</StyleName>
<Datasource>
<Parameter name="table">
(select way
from nhd_waterbody)
as waterway
</Parameter>
<Parameter name="type">postgis</Parameter>
<Parameter name="port">5432</Parameter>
<Parameter name="user">gisuser</Parameter>
<Parameter name="dbname">gis</Parameter>
<Parameter name="estimate_extent">false</Parameter>
<Parameter name="extent">-20037508,-19929239,20037508,19929239</Parameter>
</Datasource>
</Layer>
But I can't get mapnik (version 2.10) to render it. The OSM data renders just fine (it's standard MultiPolygon, not 4D) from mapnik, and QGIS (v1.8) maps all of it just hunky dory. Has anyone else experienced anything like this? Is it a geometry problem, or is that just a red herring? Is there any way to get mapnik to spit out any type of debug info when rendering?
TIA!
-- Randy
Several GIS programs, such as QGIS, internally use ST_Force_2D to make a 2D drawing from higher-dimension data types. I'm not sure how Mapnik treats these geometries, but I suspect they might not be supported. Also, be sure to double-check the extent, since this is often overlooked.
If you are not actually using the higher dimensions, then remove them! For PostGIS 2.0:
ALTER TABLE my_table
ALTER COLUMN way TYPE geometry(MultiPolygon,900913) USING ST_Force_2D(way);
And for PostGIS 1.x, see this answer.
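Since the original post mentions several tables with different ZM types (MultiPolygonZM, PointZM, and so on), a small helper to generate one ALTER statement per table can save some typing. This is only a sketch; the table names in the loop are hypothetical examples:

```python
def force_2d_sql(table, column="way", geom_type="MultiPolygon", srid=900913):
    """Build the PostGIS 2.0 ALTER statement that drops the Z/M dimensions."""
    return (
        "ALTER TABLE {t} ALTER COLUMN {c} "
        "TYPE geometry({g},{s}) USING ST_Force_2D({c});"
    ).format(t=table, c=column, g=geom_type, s=srid)

# One statement per table; the table names here are hypothetical examples.
for table, gtype in [("nhd_waterbody", "MultiPolygon"), ("nhd_point", "Point")]:
    print(force_2d_sql(table, geom_type=gtype))
```

Run the generated statements with psql or your client of choice, once per ZM column.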
The IBM Watson Dialog API documentation on the following page refers to an entityRules node for expert dialog designers to extract the system-programmed entities but does not say anything else about the node:
http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/dialog/reference_nodes.shtml#reference_entityRules
Is there more detailed documentation on how this node can be used?
You can use entities to create your own data type. So in the doc, we see the example
<entities>
<entity name="currency" entityExample="dollar" entityType="GENERIC">
<value name="USD" value="USD">
<grammar>
<item>dollar </item>
<item>buck</item>
</grammar>
</value>
<value name="EUR" value="EUR">
<grammar>
<item>euro</item>
<item>eur</item>
<item>european buck</item>
</grammar>
</value>
<entityRules></entityRules>
</entity>
</entities>
This "currency" entity has a couple of values (USD and EUR), but it could be extended with more grammar items for each value. We could also add more values (say YEN, AUD, etc., or Japanese Yen, Australian Dollar, etc.).
The next thing would be to utilize the entity in a variation. So you could add a variation in an Input node, example:
I want to convert (currency) to (currency) tomorrow!
You can use any entities in a variation by simply including brackets around it.
You can also assign entity info into a profile variable so you can later access it and utilize it in your Dialog logic. Example variation:
I want to convert (currency)={CURRENCY1} to (currency)={CURRENCY2} tomorrow!
In this example, CURRENCY1 and CURRENCY2 are profile variables that, at run time, contain the entity match info.
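Conceptually, the grammar items are surface forms that resolve to the canonical value name. As a rough illustration of that behaviour only (this is not how the Watson runtime is implemented), the matching works something like:

```python
# Toy illustration only (not the Watson runtime): each <grammar> item is a
# surface form that resolves to the canonical <value> name.
GRAMMAR = {
    "USD": ["dollar", "buck"],
    "EUR": ["euro", "eur", "european buck"],
}

def match_entity(text):
    """Return the canonical value whose grammar item occurs in the text."""
    lowered = text.lower()
    # Try longer items first so "european buck" beats plain "buck".
    pairs = sorted(
        ((item, value) for value, forms in GRAMMAR.items() for item in forms),
        key=lambda pair: -len(pair[0]),
    )
    for item, value in pairs:
        if item in lowered:
            return value
    return None
```

So a variation like "I want to convert (currency)" would pick up "buck" as USD and "european buck" as EUR.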
Hope this helps.
Not wanting to clog up the question, I've left out most of the code, but I can add it if it helps.
Using Breeze 1.4.9 and Breeze.angular v0.9.0.
I have a simple model: a ChartDefinition has a single DataQuery, and that DataQuery has some parameters.
I have a breeze query:
var query = breeze.EntityQuery
.from("ChartDefinitions")
.expand(["DataQuery","DataQuery.Parameters"]);
//.noTracking();
I can see the server's response (i've replaced most of the simple properties with '...'):
[{"$id":"1","$type":"itaprm4.Domain.ChartDefinition, itaprm4","Id":1,"Title":"FirstChart", ... ,
"DataQuery":
{"$id":"2","$type":"itaprm4.Domain.DataQuery, itaprm4","Id":1, ... ,
"Parameters":
[{"$id":"3","$type":"itaprm4.Domain.DataQueryParameter, itaprm4","Id":1, ...}]
}
}
,{"$id":"4","$type":"itaprm4.Domain.ChartDefinition, itaprm4","Id":2,"Title":"ProjectBudgets", ... ,
"DataQuery":
{"$id":"5","$type":"itaprm4.Domain.DataQuery, itaprm4","Id":2, ... ,
"Parameters":[]
}
},
{"$id":"6","$type":"itaprm4.Domain.ChartDefinition, itaprm4","Id":3,"Title":"ProjectActuals", ... ,
"DataQuery":
{"$id":"7","$type":"itaprm4.Domain.DataQuery, itaprm4","Id":3, ... ,
"Parameters":
[{"$id":"8","$type":"itaprm4.Domain.DataQueryParameter, itaprm4","Id":2,"DataQueryId":3, ...},
{"$id":"9","$type":"itaprm4.Domain.DataQueryParameter, itaprm4","Id":3,"DataQueryId":3, ...}
]
}
}]
After the entities have been materialised, though, that last DataQuery object ($id:7) has a Parameters array, but it only contains the last parameter ($id:9).
Digging around in breeze.debug I saw that noTracking sends the materialisation code down a different path, so I tacked the noTracking() option onto the query. This results in both parameters appearing in the materialised Parameters array. (I'm assuming that since Breeze can materialise the object graph correctly, there isn't anything wrong with the code on the server, so I haven't included it in this question.)
I would simply keep the noTracking option on, but I'm registering a constructor function with Breeze and it doesn't get called if noTracking is on.
store.registerEntityTypeCtor('ChartDefinition', ChartDefinition);
Is there something else I need to do to get the parameters array filled without the noTracking option?
Edit:
Another observation: without the noTracking option, the DataQueryParameter with $id:8 actually ends up in the Parameters array of the DataQuery with $id:5.
Turns out this had a lot to do with what was on the server!
Our NHibernate set-up was using a different name for the DataQueryId property on the DataQuery class (the devs on the team tell me there were some issues with updating entities, and doing this solved that issue):
<class name="DataQuery" table="sys_DataQuery" dynamic-update="true" >
<id name="Id" column="DataQueryId" type="int" unsaved-value="0">
<generator class="identity" />
</id>
...
<bag name="Parameters" cascade="all-delete-orphan">
<key column="DataQueryId"/>
<one-to-many class="DataQueryParameter"/>
</bag>
</class>
<class name="DataQueryParameter" table="sys_DataQueryParameter" dynamic-update="true" >
...
<property name="DataQueryId" type="int" not-null="true" insert="true" update="true" />
...
</class>
With matching identifiers in the class definitions.
Changing the Id to DataQueryId solved my problem:
<class name="DataQuery" table="sys_DataQuery" dynamic-update="true" >
<id name="DataQueryId" column="DataQueryId" type="int" unsaved-value="0">
<generator class="identity" />
</id>
...
This seems to make sense: how would Breeze know to match DataQueryParameter.DataQueryId to DataQuery.Id? I still have no idea why Breeze could correctly materialise the object graph with noTracking switched on, though.
Hi, I am trying to render my PostGIS data with Mapnik, but I am not able to do so. Can anyone share a Python file that explains how to do this?
Manish Sharma
Google is your friend, but here's a quick sample using the Mapnik 2.1 Python bindings and XML styling:
Here's the python:
#!/usr/bin/python
import mapnik
from mapnik import Box2d

###
# Configuration
###
style = 'style.xml'
output = 'output.png'
width = 800
height = 800
bbox = Box2d(-11823891.0314, 4847942.08196, -11774971.3333, 4896861.78006)

print "Using mapnik version:", mapnik.mapnik_version()

# Use 'm' rather than 'map' to avoid shadowing the Python built-in
m = mapnik.Map(width, height)
mapnik.load_map(m, style)
m.zoom_to_box(bbox)
mapnik.render_to_file(m, output)
And here's a simple style.xml using osm data:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE Map>
<Map background-color="#FFF">
<Style name="roads">
<Rule>
<LineSymbolizer stroke="red" stroke-width="1" />
</Rule>
</Style>
<Layer name="roads" status="on">
<StyleName>roads</StyleName>
<Datasource>
<Parameter name="table">
(select way from osm_line where highway is not null) as road
</Parameter>
<Parameter name="type">postgis</Parameter>
<Parameter name="port">5432</Parameter>
<Parameter name="user">gisuser</Parameter>
<Parameter name="dbname">gis</Parameter>
</Datasource>
</Layer>
</Map>
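One thing worth noting about the script above: the bbox handed to Box2d is in EPSG:900913 (spherical mercator) metres, not lon/lat. mapnik.Projection can forward-project coordinates for you, but the spherical-mercator math is simple enough to sketch directly; the corner coordinates below are arbitrary examples:

```python
import math

R = 6378137.0  # spherical-mercator earth radius in metres

def lonlat_to_mercator(lon, lat):
    """Forward-project WGS84 lon/lat degrees to EPSG:900913 metres."""
    x = math.radians(lon) * R
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y

# Build bbox corners from lon/lat (these coordinates are arbitrary examples):
minx, miny = lonlat_to_mercator(-106.3, 39.9)
maxx, maxy = lonlat_to_mercator(-105.8, 40.3)
```

Feed the resulting minx/miny/maxx/maxy into Box2d and the rendered area will match your lon/lat extent.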
I have a C# (WPF) application in which I want to display an SSRS report in the ReportViewer control. The local report file has an XML datasource embedded in it. The report is displayed correctly when run from SQL Server Business Intelligence Development Studio, but when I run it from my app I get the following error:
A data source instance has not been supplied for the data source '...'.
So here is what I'm doing:
I have defined embedded XML data, as explained in this tutorial Defining a Report Dataset from Embedded XML Data. I have a data source called XmlDataSource_TopCustomers and a data set called XmlDataSet_TopCustomers, using that data source. I have referred the data set in a table and a chart. Overall, the RDL looks like this (just the essential, of course):
<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition" xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner">
<Body>
<ReportItems>
<Tablix Name="Tablix1">
<DataSetName>XmlDataSet_TopCustomers</DataSetName>
</Tablix>
<Chart Name="Chart1">
<DataSetName>XmlDataSet_TopCustomers</DataSetName>
</Chart>
</ReportItems>
</Body>
<DataSources>
<DataSource Name="XmlDataSource_TopCustomers">
<ConnectionProperties>
<DataProvider>XML</DataProvider>
<ConnectString />
</ConnectionProperties>
<rd:SecurityType>None</rd:SecurityType>
<rd:DataSourceID>47833b52-231f-4634-8af4-3c63272b02a7</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="XmlDataSet_TopCustomers">
<Query>
<DataSourceName>XmlDataSource_TopCustomers</DataSourceName>
<CommandText><Query>
<ElementPath>Root /CustomerOrder {#CustomerNo, #CustomerName, #OrdersCount (Integer), #Total(Float), #AveragePerOrder(Float)}</ElementPath>
<XmlData>
<Root>
<CustomerOrder CustomerNo="10001" CustomerName="Name 1" OrdersCount="2" Total="5.446740000000000e+003" AveragePerOrder="2.723370000000000e+003" />
<CustomerOrder CustomerNo="10894" CustomerName="Name 2" OrdersCount="5" Total="3.334750000000000e+003" AveragePerOrder="6.669500000000001e+002" />
<CustomerOrder CustomerNo="12980" CustomerName="Name 3" OrdersCount="2" Total="2.003290000000000e+003" AveragePerOrder="1.001645000000000e+003" />
</Root>
</XmlData>
</Query></CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
<Fields>...
</DataSets>
<rd:ReportUnitType>Inch</rd:ReportUnitType>
<rd:ReportID>02172db8-2a1d-4c35-9555-b37ee6193544</rd:ReportID>
</Report>
At this point everything works fine from the IDE.
In my C# application, I have a ReportViewer and the following code:
Viewer.LocalReport.ReportPath = @"<actualpath>\TopCustomers.rdl"; // actual path is OK
Viewer.RefreshReport();
And then I get that
A data source instance has not been supplied for the data source 'XmlDataSet_TopCustomers'.
I've seen others having the same problem, but in most of the cases the problem is multiple datasources, which is not the case here, as you can see from the RDL snippet above.
Any suggestions?
The answer to my question can also be found here: When to use RDLC over RDL reports? and here: http://www.gotreportviewer.com/. It's basically this:
Unlike the Report Server the ReportViewer control does not connect to
databases or execute queries. Also, in local mode the only export
formats available are Excel, Word and PDF. (In remote mode all formats
supported by the Report Server are available.) The ReportViewer
control cannot be extended by adding custom renderers or custom report
items.
More information can be found here http://msdn.microsoft.com/en-us/library/ms252109(v=vs.80).aspx.
The ReportViewer control, which processes .rdlc files, ignores the
<Query> element of RDL. If a report definition contains a query, the
control will not process it.
and
When converting a .rdl file to .rdlc format, you must manually replace
the data source and query information in the report definition with
data constructs provided in your application
So you have to fetch the data explicitly and provide it to the ReportViewer as a ReportDataSource with exactly the same name as the dataset in the RDL file.
I have a small command line app that does something similar, but between defining the report path and doing anything with the report viewer I'm setting a data source for the report to be run against:
report.DataSources.Add(new ReportDataSource("DataSet_for_Distribution", table));
...table is a DataTable.
After that I have no problems programmatically calling the report Render method.
Can you set a break before the render and see what data sources the report actually has?
Another thing to try (and it may just be how you formatted it to post it here, or the site's formatting mangled it): when I embed an XML data set in a report, it all uses a format like this:
<CommandText><Query>
<ElementPath>Root /S {#OrderDate (Date), #TotalDue (Decimal)} /C {#LastName} </ElementPath>
<XmlData>
<Root>
<S OrderDate="2003-07-01T00:00:00" SalesOrderNumber="SO51131" TotalDue="247913.9138">
<C FirstName="Shu" LastName="Ito" />
</S>
<S OrderDate="2003-10-01T00:00:00" SalesOrderNumber="SO55282" TotalDue="227737.7215">
<C FirstName="Shu" LastName="Ito" />
</S>
<S OrderDate="2002-07-01T00:00:00" SalesOrderNumber="SO46616" TotalDue="207058.3754">
<C FirstName="Jae" LastName="Pak" />
</S>
<S OrderDate="2002-08-01T00:00:00" SalesOrderNumber="SO46981" TotalDue="201490.4144">
<C FirstName="Ranjit" LastName="Varkey Chudukatil" />
</S>
<S OrderDate="2002-09-01T00:00:00" SalesOrderNumber="SO47395" TotalDue="198628.3054">
<C FirstName="Michael" LastName="Blythe" />
</S>
</Root>
</XmlData>
</Query></CommandText>
I am not sure, from what you have stated, whether the data source has specified credentials.
This part here:
<ConnectionProperties>
<DataProvider>XML</DataProvider>
<ConnectString />
</ConnectionProperties>
Generally speaking, with SQL data sources, when reports fail to render for other users or from applications, it is because the hosting server assumes a different credential than the IDE you built the application with. The server does not know that my name is Brett, and that my credentials are running the report, when it is called remotely. Specifying the credentials on the server hosting the report usually gets around this: go into the server hosting the report (I assume you have one, since you have an .rdl report rather than an .rdlc report), find the data source, click Properties, and change the setting to 'use these credentials'. Supply credentials that you know work.
This may fix the issue. I am not certain about SharePoint and XML connections, but this is a common cause of viewing issues with SQL Server connections.
We have a situation where we may have to create multiple instances of Solr/Tomcat running on different ports, on either a single machine or several different machines. Towards doing this, I was wondering whether it's possible to specify the dataDir variable (within solrconfig.xml) using an environment variable, for example like so: <dataDir>${envvar}/path/to/index</dataDir>.
As I'm working on a similar setup, I needed this too. I don't think it's good practice to use environment variables for this. You are probably better off using the multicore setup, or a property file referenced from solr.xml.
eg.
<core name="core_1" instanceDir="core_1" properties="core1.properties" />
and then in your core1.properties:
config.datadir=/datadir1
and then use that in your solrconfig.xml:
<dataDir>${config.datadir}</dataDir>
Cheers,
Patrick
Yes, you can do this, but you need to jump through a couple of hoops to set it up, using system properties passed to the JVM when you start it.
Anywhere you want your environment variable to work in your configuration files, put the variable like this:
${VAR}
Then, when you start your JVM, pass it the variable by doing:
java -DVAR=$your_system_variable
So, to make this concrete, here's what we do:
java -server -DINSTALL_ROOT=$INSTALL_ROOT -jar start.jar
And our config has something like:
<filter class="solr.SynonymFilterFactory" synonyms="${INSTALL_ROOT}/Solr/conf/synonyms.txt" />
Works like a charm.
Go multi-core.
You can tell Solr to deploy a particular index directory as a core. For example, to deploy a Solr index at path_to_instance_directory on http://localhost:8983/solr/coreX, you would do:
http://localhost:8983/solr/admin/cores?action=CREATE&name=coreX&instanceDir=path_to_instance_directory&config=config_file_name.xml&schema=schem_file_name.xml&dataDir=data
You can tell Solr to create and load cores, swap two running cores, swap a running core with an inactive core, etc.
As explained on wiki.apache.org, you can use system property substitution in solrconfig.xml like below:
<dataDir>${data.dir}</dataDir>
Then you can specify values in a properties file:
#solrcore.properties
data.dir=/data/solrindex
Another way is to set the data directory at Solr runtime, in this manner:
java -Dsolr.data.dir=/data/dir -jar start.jar
and in XML file use the following syntax:
<dataDir>${solr.data.dir:./solr/data}</dataDir>
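The ${name:default} form falls back to the default when the property is unset. As a toy illustration of that resolution rule (this is not Solr's actual parser), the substitution behaves like:

```python
import re

def substitute(text, props):
    """Resolve ${name} and ${name:default} placeholders, Solr-style."""
    def repl(match):
        name, sep, default = match.group(1).partition(":")
        if name in props:
            return props[name]
        if sep:  # a ':' was present, so a default value exists
            return default
        raise KeyError("no value supplied for property %r" % name)
    return re.sub(r"\$\{([^}]*)\}", repl, text)
```

So with -Dsolr.data.dir set the property wins, and without it the ./solr/data default is used; a plain ${name} with no default fails instead of silently expanding to nothing.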
I think the better method is to define solr.xml within your solr.home, e.g.:
<solr persistent="true" sharedLib="lib">
<cores adminPath="/admin/cores">
<core name="core0" instanceDir="core0" dataDir="/var/lib/solr/core0" />
<core name="core1" instanceDir="core1" dataDir="/var/lib/solr/core1" />
</cores>
</solr>
Note: I don't think you can use any external variables here.
Finally, you can set values via a Tomcat context file (e.g. solr.xml) in conf/Catalina/localhost, for example:
<Context docBase="webapps/solr.war" crossContext="true">
<Environment name="solr/home" type="java.lang.String" value="/opt/solr/ads_solr" override="true" />
<Environment name="solr/data/dir" type="java.lang.String" value="/var/lib/solr" override="true" />
</Context>
Here the solr/home entry works, but solr/data/dir won't work without patching Solr.
See: tomcat_solr.xml.erb