I use OSM data for the development of a population disaggregation algorithm. In order to improve the results I want to add missing attributes to some of the buildings with an OSM editor. I am identifying these buildings in PostGIS.
As I want to map the features directly in OpenStreetMap, I am currently looking for an efficient procedure to produce an OSM file with the buildings (from the PostGIS table), which I can then load into JOSM for mapping.
I have read some other posts discussing the export of PostGIS tables with Osmosis (e.g. "Using osmosis to convert POSTGIS Table to .OSM").
However, I worry that some OSM information will be lost along the way, and that when the updates are pushed back to OSM the OSM dataset will be damaged. Are there any best-practice workflows or recommendations for accomplishing this task?
I am currently working with Java Spring and Postgres.
I have a query on a table; many filters can be applied to the query, and each filter needs many joins.
This query is very slow, both because of the number of joins that must be performed and because there are many rows in the table.
Foreign keys and indexes are correctly created.
I know one approach could be to keep duplicate information to avoid doing the joins: create a new table, say infoSearch, keep it updated via triggers, and at query time run the search operations on that table. This way I would do just one join.
But I have some doubts:
What is the best approach in Postgres to store an item list in flat (denormalized) form?
I know there is a JSON datatype. Could I use this to hold the information needed for the search, and query it with a JSON path? Is this performant with lists?
I also greatly appreciate any advice on another approach that can be used to fix this.
Is there any software that can be used to make this more efficient?
I'm wondering if it wouldn't be more performant to move to another style of database, like a graph database. At this point the only problem I have is with this specific table; the rest of the workload consists of simple queries that map very well to a relational database.
Is there any rule of thumb, based on ratios and number of items, for choosing which kind of database to use?
Denormalization is a tried and true way to speed up queries, reports, and searches in relational databases. It uses a standard time vs. space tradeoff to reduce query time, at the cost of duplicating the data and increasing write/insert time.
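As a minimal sketch of this tradeoff (the schema is invented for illustration, and SQLite stands in for Postgres), a flat search table can be kept in sync with a trigger so that reads need no joins:

```python
import sqlite3

# Hypothetical schema for illustration: products joined to categories.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE product  (id INTEGER PRIMARY KEY, name TEXT,
                       category_id INTEGER REFERENCES category(id));

-- Denormalized copy: one flat row per product, no joins needed to search.
CREATE TABLE product_search (product_id INTEGER PRIMARY KEY,
                             product_name TEXT, category_name TEXT);

-- Trigger keeps the flat table in sync on insert (updates and deletes
-- would need their own triggers in a real system).
CREATE TRIGGER product_ai AFTER INSERT ON product BEGIN
  INSERT INTO product_search
  SELECT NEW.id, NEW.name,
         (SELECT name FROM category WHERE id = NEW.category_id);
END;
""")
conn.execute("INSERT INTO category VALUES (1, 'tools')")
conn.execute("INSERT INTO product VALUES (10, 'hammer', 1)")

# The search hits a single flat table instead of a multi-join query.
row = conn.execute("SELECT product_name, category_name FROM product_search "
                   "WHERE category_name = 'tools'").fetchone()
print(row)  # ('hammer', 'tools')
```

In Postgres the same idea is usually implemented with AFTER INSERT OR UPDATE triggers, or with a materialized view that is refreshed periodically.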
There are third-party tools that are specifically designed for this use case, including search tools (like Elasticsearch, Solr, etc.) and other document-centric databases. Graph databases are probably not useful in this context; they are focused on traversing relationships, not broad searches.
Let's say I use Elasticsearch in my application for searching restaurants near me.
I get the sorted restaurant IDs from Elasticsearch, and using these IDs I fetch the data, such as the name, location, and popular menus of each restaurant, from the RDB.
As you can guess, it takes some time to get the data from the RDB. If I stored all the data used by the application in Elasticsearch, I could make this faster.
But I'm wondering what is the recommended way to store data in Elasticsearch and what to consider for choosing it.
I think there are some options, like the ones below:
Store only the data used for search
Store all data, for both search and display
Thanks!
This is a very interesting but very common question, and normally every application needs to decide this. I can provide some data points which should help you make an informed decision.
Elasticsearch is a near-real-time (NRT) search engine, and there will always be some latency when you update ES from your RDB. So some items which are in the RDB will not yet be in ES, and thus will not be in your search results.
Considering the above: why do you want to make another call to the RDB? To fetch the latest info on top of your ES search results, or for some other reason, like avoiding fetching/storing large documents in ES?
For every field, ES provides a way to store the value, either via the store parameter or via _source (enabled by default). If neither is enabled, you can't fetch the actual value from ES and have to go to the RDB.
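To illustrate the store/_source point, here is a sketch of an index mapping (the field names are invented for this example) where _source is disabled and only one field is marked as stored; with such a mapping, name can be fetched back from ES as a stored field, but location would have to come from the RDB:

```json
{
  "mappings": {
    "_source": { "enabled": false },
    "properties": {
      "name":     { "type": "keyword", "store": true },
      "location": { "type": "keyword", "store": false }
    }
  }
}
```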
An RDB call to fetch the field values puts a penalty on performance; have you benchmarked it against fetching the values directly from ES?
Every search system has its own functional and non-functional requirements. Hopefully the points above give you more information and help you make a better decision.
I am working on an online tool that serves a number of merchants (e.g. retail merchants). The application takes data from different merchants and provides some analytics on their retail shop. The idea is that any merchant can sign up for the tool, send their transaction and inventory data (perhaps uploaded as an Excel file, or posted as a JSON object to my application), and get the results back.
My application has a domain model that is intrinsic to the application and contains all the data points that merchants can use, e.g.:
Product {
productId,
productName,
...
}
But the problem I am facing is that each merchant will have their own way of representing data; for example, merchant X may call a product prod, while merchant Y may call it proddt.
Now I need a way to convert data represented in a merchant's format into a form the application understands, i.e. each time there is a request from merchant X, the application should map prod to product, and so on.
At first I was thinking of hand-coding these mappers, but this is not viable, as I can't realistically code the mappings for the thousands of merchants that may join my application.
Another solution I was thinking of is to let each merchant map the fields of their domain to the application domain through a UI, save this mapping somewhere in the DB, and on each request from that merchant first load the mapping from the DB and then apply it to the incoming request. (Though I am still unsure how this can be done.)
Has anyone faced a similar design issue before, and do you know a better way of solving this problem?
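A minimal sketch of the per-merchant mapping idea (all names here are invented): the mapping a merchant configured through the UI is loaded from the DB as a plain dict and applied to each incoming payload:

```python
# Per-merchant field mappings, as they might be loaded from a DB table
# with columns (merchant_id, their_field, our_field). Names are invented.
MAPPINGS = {
    "merchant_x": {"prod": "product", "prod_name": "productName"},
    "merchant_y": {"proddt": "product", "nm": "productName"},
}

def normalize(merchant_id, payload):
    """Rename a merchant's fields to the application's canonical names.
    Fields without a mapping entry are passed through unchanged."""
    mapping = MAPPINGS[merchant_id]
    return {mapping.get(field, field): value for field, value in payload.items()}

print(normalize("merchant_x", {"prod": "p-1", "prod_name": "Hammer"}))
# {'product': 'p-1', 'productName': 'Hammer'}
```

The DB lookup per request can be cached per merchant, since mappings change rarely compared to how often they are applied.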
If you can fix the order of the fields, then you can easily map the data sent by your clients and return the result. For example, in Excel your clients could provide data in this format:
product | name | quantity | cost
Condition: ALL your clients should send data in this format.
Then it will be easy for you to map these fields, access them with the correct DTO, and later save and process the data.
I appreciate this "language" concern, and in fact multilingual applications do it the way you describe. You need to standardize your terminology at your end, so that each term has only one meaning and only one word/term to describe it. You could even use mnemonics for that, e.g. for "favourite product" you use "Fav_Prod" in your app and in your DB. Then, when you present data to your customer, your app looks up their preferred term in a look-up table, and uses "favourite product" for customer one (and perhaps the admin), "favr prod" for customer two, etc.
Look at SQL and DB design, you'll find that this is a form of normalization.
Are you dealing with legacy systems and/or APIs at the customer end? If so, someone will indeed have to type in the data.
If you have thousands of customers, but only 10 to 50 terms, it may be best to let the customer, not you, set the terms.
You might be lucky, and be able to cluster customers together who use similar or close enough terminology. For new customers you could offer them a menu of terms that they can choose from.
If merchants were required to input their mapping with their data, your tool would not require a DB. In JSON, the input could be like the following:
input = {mapping: {ID: "productId", name: "productName"}, data: {productId: 0, productName: ""}}
Then, you could convert data represented in any merchant's format to your tool's format as follows:
ID = input.data[input.mapping.ID]
name = input.data[input.mapping.name]
To recap:
You have an application
You want to load client data (merchants in this case) into your application
Your clients “own” and manage this data
There are N ways in which such client data can be managed, where N <= the number of possible clients
You will run out of money and your business will close before you can build support for all N models
Unless you are Microsoft, or Amazon, or Facebook, or otherwise have access to significant resources (time, people, money)
This may seem harsh, but it is pragmatic. You should assume NOTHING about how potential clients will be storing their data. Get anything wrong and your product will return bad results, and you will lose that client. Unless your clients are using the same data management tools—and possibly even then—their data structures and formats will differ, and could differ significantly.
Based on my not entirely limited experience, I see three possible ways to handle this.
1) Define your own way of modeling data. Require your customers to provide data in this format. Accept that this will limit your customer base.
2) Identify the most likely ways (models) your potential clients will be storing data (e.g. the most common existing software systems they might be using for this). Build import structures and formats to support these models. This, too, will limit your customer base.
3) Start with either of the above. Then, as part of your business model, agree to build out your system to support clients who sign up. If you already support their data model, great! If not, you will have to build it out. Maybe you can work the expense of this into what you charge them, maybe not. Your customer base will be limited by how efficiently you can add new and functional means of loading data to your system.
I am building a to-B app that has to meet the different needs of many customers. For example, for people, one client has two tags, age and position, while another has a tag for habits, but I have only one system.
I have considered several methods to solve this problem:
A wide (horizontal) table with multiple spare fields
EAV model (Entity-Attribute-Value)
Storing the data in a JSON structure
Because I want the fields to be defined by the user, I will not use method 1.
As for method 2, I tested it with about a million rows and the query speed is slow. Can anyone suggest a better way to solve this, or directions to optimize methods 2 and 3? Which database is good for my problem: MySQL, MongoDB, or something else?
My question is: in a to-B project where each customer's needs are different and multiple custom fields are required, how should the database structure be designed, and how can search be made better and faster?
I am trying to do the following:
We are trying to design a fraud detection system for the stock market.
I know the specifications for the frauds (they are like templates).
So I want to know if I can design a template and find all records that match it.
Notice:
I can't use traditional queries because the templates are complex.
For example, one of my frauds is circular trading. It goes like this:
A bought from B, B bought from C, and C bought from A (it's a cycle),
and the cycle can involve 4 or 5 persons.
Is there any good suggestion for this situation?
I don't see why you can't use "traditional queries" as you've stated. SQL can be used to write extraordinarily complex queries. For that matter I'm not sure that this is a hugely challenging question.
Firstly, I'd look at the behavior you have described as very transactional, and therefore treat the transactions as the model. I'd likely have a transactions table with columns like buyer, seller, amount, etc.
You could alternatively have the shares in their own table and store, say, the previous 100 owners of each share in the same table using STI (Single Table Inheritance), by putting all the primary keys of the owners into an "owners" column in your shares table, like 234/823/12334/1234/... That way you can do complex queries to see whether a share was owned by the same person, or look for patterns in the string, easily and quickly.
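Since circular trading is naturally a graph problem, another option on top of the transactions table is a plain depth-first search over the trades, looking for short cycles. This is only a sketch with invented data, not production fraud detection:

```python
from collections import defaultdict

def find_cycles(trades, max_len=5):
    """Return trader cycles up to max_len found in (seller, buyer) pairs."""
    graph = defaultdict(set)
    for seller, buyer in trades:
        graph[seller].add(buyer)

    cycles = []
    def dfs(start, node, path):
        if len(path) > max_len:
            return
        for nxt in graph[node]:
            if nxt == start and len(path) >= 2:
                cycles.append(path[:])          # closed a cycle back to start
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])
    for trader in list(graph):
        dfs(trader, trader, [trader])
    return cycles

# "A bought from B" means money/shares flow seller -> buyer, i.e. edge B -> A.
trades = [("B", "A"), ("C", "B"), ("A", "C"), ("C", "D")]
print(find_cycles(trades))  # finds the A -> C -> B -> A cycle
```

Each cycle is reported once per starting trader (i.e. in all its rotations); deduplicating rotations is left out for brevity.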
Update:
I wouldn't suggest making up a "small language". I don't see why you'd want to do something like that when you have a huge selection of wonderful languages and databases to choose from, all of which have well-refined and tested methods to solve exactly what you are doing.
My best advice is to pop open your IDE (thumbs up for TextMate), pick your favorite language (Ruby in my case), find some sample data, create your database, and start writing some code! You can't go wrong experimenting like this; it will expose better ways to go about it than we can dream up here on Stack Overflow.
Definitely Data Mining. But as you point out, you've already got the models (your templates). Look up fraud DETECTION rather than prevention for better search results?
I know some banks use SPSS PASW Modeler for fraud detection. It is very intuitive and you can see what you are doing as you play around with the data, so you can implement your templates. I agree with Joseph: you need to get playing and make some new data structures.
Maybe a timeseries model?
Theoretically you could develop a "Small Language" first, something with a simple syntax (that makes expressing the domain - in your case fraud patterns - easy) and from it generate one or more SQL queries.
As with most solutions, this could be thought of as a slider: at one extreme there is the "full Fraud Detection Language"; at the other, you could just build stored procedures for the most common cases, then write new stored procedures which use the more "basic" blocks you wrote before to implement the various patterns.
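As a tiny sketch of the "generate SQL from a pattern" end of that slider (table and column names are invented), a circular-trading pattern of length n can be expanded mechanically into a chain of self-joins:

```python
def circular_trade_sql(n):
    """Build a SQL query finding cycles of exactly n traders in a
    trades(seller, buyer) table. Table/column names are invented."""
    joins = [
        f"JOIN trades t{i} ON t{i}.seller = t{i - 1}.buyer"
        for i in range(1, n)
    ]
    return (
        "SELECT t0.seller AS start_trader FROM trades t0\n"
        + "\n".join(joins)
        + f"\nWHERE t{n - 1}.buyer = t0.seller"
    )

print(circular_trade_sql(3))
```

The generated query also matches rotations of the same cycle and cycles that revisit a trader; a real generator would add filters for those, but the point is that the pattern language can stay tiny while the SQL grows mechanically.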
What you are trying to do falls under the Data Mining umbrella, so you could also try to learn more about it: maybe you can find a Data Mining package for your specific DB (you didn't specify which) and see if it helps you find common patterns in your data.