GeoDjango GeoJSON serializer geometry always 'null' - PostGIS

Hello friends of GeoDjango and the GeoJSON serializer. I was following the official GeoDjango tutorial:
https://docs.djangoproject.com/en/1.8/ref/contrib/gis/tutorial/
So in the end I have a PostgreSQL + PostGIS database full of countries: their name, their ISO3 code and so on, and especially their geometry in mpoly as a MultiPolygon (stored as WKB). I want to retrieve the countries from the database using GeoDjango, and I am struggling with that.
I can retrieve the properties of one object one after the other:
from django.http import HttpResponse
from django.shortcuts import render
from django.core.serializers import serialize
from AppName.models import WorldBorder
[...]
WorldBorder.objects.filter(name='Germany')[0].name # "Germany"
WorldBorder.objects.filter(name='Germany')[0].iso3 # "DEU"
WorldBorder.objects.filter(name='Germany')[0].mpoly.geojson # long & correct output
So the data is correctly stored in the database and I can retrieve the object's properties. Now I want to get a full GeoJSON document for the country. Django provides a GeoJSON serializer for that:
https://docs.djangoproject.com/en/1.8/ref/contrib/gis/serializers/
If I use it in the described way:
serialize('geojson',
          WorldBorder.objects.filter(name='Germany'),
          geometry_field='mpoly',
          fields=('name',))
I get this output:
u'{"type": "FeatureCollection", "crs":{"type": "name", "properties": {"name": "EPSG:4326"}},
"features": [{"geometry": null,"type": "Feature",
"properties":{"name": "Germany" }}]}'
What is driving me crazy is "geometry": null.
So it serializes everything except the geometry. Why is that? What am I doing wrong? And, most importantly: how do I get the geometry out of my PostGIS database in GeoJSON format using GeoDjango? Any help is appreciated.
Thank you :)

In case anyone else runs into this issue:
It seems the problem is that in Django 1.8 the geometry field has to be passed in fields for it to be serialized.
More here https://code.djangoproject.com/ticket/26138
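If you are stuck on Django 1.8, a possible workaround based on that ticket (an untested sketch; the model and field names are taken from the question) is to list the geometry field in fields as well:

from django.core.serializers import serialize

from AppName.models import WorldBorder

# Workaround sketch for Django 1.8: include the geometry field in `fields` too,
# otherwise the serializer drops it and emits "geometry": null.
geojson = serialize(
    'geojson',
    WorldBorder.objects.filter(name='Germany'),
    geometry_field='mpoly',
    fields=('name', 'mpoly'),
)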

If anybody is still interested in the answer: after a Django update I could fix it by using the normal serializer from the Django packages,
from django.core.serializers import serialize
and then serializing with the 'geojson' format:
serialize('geojson',
          WorldBorder.objects.filter(name='Germany'),
          geometry_field='geom',
          fields=('id', 'name', 'other_properties_you_want'))
and it worked like a charm! Except for the fact that the id did not get serialized.
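For completeness, here is a minimal sketch of serving that output from a Django view; the view name is hypothetical and the field names follow the question's model (which uses mpoly as the geometry field), so adjust them to your project:

from django.http import HttpResponse
from django.core.serializers import serialize

from AppName.models import WorldBorder


def country_geojson(request, name):
    # Hypothetical view: serialize the matching country and return it as GeoJSON.
    geojson = serialize(
        'geojson',
        WorldBorder.objects.filter(name=name),
        geometry_field='mpoly',
        fields=('name',),
    )
    return HttpResponse(geojson, content_type='application/json')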

Related

Azure logic apps: Nullable JSON values not available as dynamic content

I'm building a logic app that pulls some JSON data from a REST API, parses it with the Parse JSON block, and pushes it to Azure Log Analytics. The main problem I'm hitting is an important JSON field can either be an object or null. Per this post I changed the relevant part of my JSON schema to something like this
"entity": {"type": ["object", "null"] }
While this works, I'm now no longer able to access entity later in the logic app as dynamic content. I can access all other fields parsed by the Parse JSON block downstream in the logic app (those that don't have a nullable type). If I remove the "null" option and just have the type set to object, I can access entity in dynamic content once again. Does anyone know why this might be happening and/or how to access the entity field downstream?
In my tests, if we use "entity": {"type": ["object", "null"] }, we indeed cannot directly select entity in dynamic content.
But we can use the following expression to get the entity:
body('Parse_JSON')?['entity']
In my tests this returns the entity without problems.
For a better understanding, let me cite a few more examples:
1. If your json is like this:
{
    "entity": {
        "testKey": "testValue"
    }
}
Your expression is like this:
body('Parse_JSON')?['entity']
2. If your json is like this:
{
    "test": {
        "entity": {
            "testKey": "testValue"
        }
    }
}
Your expression should look like this:
body('Parse_JSON')?['test']?['entity']

Append sensor data into document in Couchbase database

We are a group of people writing a bachelor project about storing sensor data in a NoSQL database, and we have chosen Couchbase for this.
We want to store quite a lot of data in the same document, one document per day per sensor, and we want to append new sensor data which comes in every minute.
But unfortunately, we are not able to append new data to an existing document without overwriting the existing data.
The structure for the documents is:
DocumentID: Sensor + date, ie: KitchenTemperature20180227
{
    "topic": "Kitchen/Temp",
    "type": "temperature",
    "unit": "DegC",
    "20180227130400": [
        {
            "data": "24"
        }
    ],
    ...
    "20180227130500": [
        {
            "data": "25"
        }
    ]
}
We are all new to Couchbase and NoSQL databases, but eager to learn and understand the best way to implement this.
We've tried the upsert, insert and update commands, but they all overwrite the existing document or won't execute because the document already exists. As you can see, we have some top-level information, like topic, type and unit. The rest should be data coming in every minute and appended to the existing document.
Help on how to proceed would be very appreciated.
Best regards, Kenneth
In this case you can use the sub-document API. This allows you to modify portions of a document based on a "path", without rewriting the rest of the document.
You can mutate sub-documents as well as read them. Look at the sub-document API documentation for Couchbase. There are also blog posts on the Couchbase blog site that go through examples in Java and Go.
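If it helps, here is a rough sketch of that idea with the Couchbase Python SDK (3.x-era API; exact import paths and method names vary between SDK versions, and the connection details and bucket name are assumptions):

from datetime import datetime, timezone

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster, ClusterOptions
import couchbase.subdocument as SD

# Assumed connection details and bucket name.
cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
collection = cluster.bucket("sensors").default_collection()

doc_id = "KitchenTemperature20180227"
minute_key = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")

# Writes only the per-minute path; topic/type/unit and earlier readings in the
# same document are left untouched. Note that all-digit field names may need
# escaping in sub-document paths, so nesting the readings under e.g. a
# "readings" object is worth considering.
collection.mutate_in(doc_id, [SD.upsert(minute_key, [{"data": "24"}])])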

How to fetch data from couchDB using couch api?

Instead of keys and IDs alone, I want to get all the docs via the CouchDB API. I have tried GET "http://localhost:5984/db-name/_all_docs" but it returned:
{
    "total_rows": 4,
    "offset": 0,
    "rows": [
        {"id": "11", "key": "11", "value": {"rev": "1-a0206631250822b37640085c490a1b9f"}},
        {"id": "18", "key": "18", "value": {"rev": "30-f0798ed72ceb3db86501c69ed4efa39b"}},
        {"id": "3", "key": "3", "value": {"rev": "15-0dcb22bab2b640b4dc0b19e07c945f39"}},
        {"id": "6", "key": "6", "value": {"rev": "4-d76008cc44109bd31dd32d26ba03125d"}}
    ]
}
From the documentation: the request below returns the data as expected, but it requires a set of keys in the request body.
POST /db/_all_docs HTTP/1.1

{
    "keys": [
        "11",
        "18"
    ]
}
Thanks in advance.
The _all_docs endpoint is actually just a system-level view that uses the _id field as the index. Thus, any parameters that you can use for views also apply here.
If you read the documentation further, you'll find that adding the parameter include_docs=true to your view will include the original documents in the results. The documents will be added as the doc field alongside id, value and rev.
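For example, a small sketch with Python's requests library (the database name is taken from the question; host and port are the CouchDB defaults):

import requests

# Fetch every document in full, not just ids, keys and revs.
resp = requests.get("http://localhost:5984/db-name/_all_docs",
                    params={"include_docs": "true"})
resp.raise_for_status()

for row in resp.json()["rows"]:
    print(row["id"], row["doc"])  # the full document is in the "doc" field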

spring data mongo Language annotation

I'm trying to figure out how to use the @Language annotation from the spring-data-rest-mongo project.
I would like to store and retrieve MongoDB documents and query them; a simple document is as follows:
{
    "id": "abc",
    "name": "light",
    "description": "wave or particle"
}
I would like to store it and retrieve it in different languages.
Any hint about it?
A sample using spring-data-rest would be greatly appreciated.
Thanks a lot.
The @Language annotation is used to set the language_override property for a full-text index and therefore does not help with designing a collection of multilingual documents.
For more information please see the MongoDB Text Indexes and the Spring Data MongoDB Full Text Search support.
To perform a search for a certain language I usually follow this approach: Multi-language documents example
The entity
@Document
@CompoundIndex(def = "{'language': 1, 'textProperty': 'text'}")
The repository
@Query("{'language': ?0, $text: {$search: ?1, $language: ?0}}")
Stream<TheDocument> findTheDocument(String language, String criteria, Sort sort);
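For reference, the same index and query can be expressed directly against MongoDB. Here is a hedged pymongo sketch under the same assumptions (connection string, database, collection and field names are illustrative, following the entity above):

from pymongo import MongoClient, TEXT

# Assumed connection string, database and collection names.
client = MongoClient("mongodb://localhost:27017")
coll = client["testdb"]["documents"]

# Compound index matching the @CompoundIndex above: filter on 'language',
# full-text index on 'textProperty'.
coll.create_index([("language", 1), ("textProperty", TEXT)])

# Equivalent of the @Query above: restrict to one language and search within it.
for doc in coll.find({"language": "en",
                      "$text": {"$search": "wave", "$language": "en"}}):
    print(doc)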

Solr document Submission

I am new to Solr. Can you please tell me how a document can be submitted to Solr using the user interface? Is it necessary to create an XML version of the document first? I am looking for the simplest way of indexing documents.
Please help.
The default Solr RequestHandler (from 4.0) supports four formats: XML, JSON, CSV and javabin. There's a page under the Admin interface to submit documents to the index (select the core and Documents).
There are examples of each of the formats in the Solr reference guide. If you're using a client library, the library will usually handle this for you anyway, using an appropriate format depending on which language it's written in and what built-in libraries are available.
The simplest format for manually adding documents is probably JSON:
[
    {
        "id": "1",
        "title": "Doc 1"
    },
    {
        "id": "2",
        "title": "Doc 2"
    }
]
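If you would rather submit documents programmatically than through the Admin UI, here is a hedged sketch using Python's requests library; the host, port and core name ("mycore") are assumptions, and it posts JSON to the /update handler described above:

import requests

# Hypothetical core name; adjust host, port and core to your setup.
url = "http://localhost:8983/solr/mycore/update"
docs = [
    {"id": "1", "title": "Doc 1"},
    {"id": "2", "title": "Doc 2"},
]
# commit=true makes the documents searchable immediately.
resp = requests.post(url, params={"commit": "true"}, json=docs)
resp.raise_for_status()
print(resp.json())  # a status of 0 in the responseHeader means the update was accepted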
You can also use the DataImportHandler to import data locally at the server, such as from a SQL database. In that case you don't submit the actual rows to the server; you tell the handler to fetch the rows and create documents for you.
