How to render custom street maps [closed]

This is sort of a two part question.
First off, how can I render my own maps? Google, Bing, etc. seem to provide their own renderings that are effectively images, or so I understand. However, my objective is to be able to just get the data for streets and create my own representation from that data.
The second part is where and how can I get this information? Everywhere I look I find information on how to embed a map from Google, Bing, or whomever else.
If there is a resource that answers one, the other, or both parts, that would be awesome.

Two quick answers: OpenStreetMap and Mapnik.
OpenStreetMap is the only significant worldwide source of openly licensed street data. Some countries have their own sources - for example, in the US you can use the US Census Bureau's TIGER data (which is of very variable quality), or in the UK you can use Ordnance Survey OpenData - and if your needs are restricted to one country, that might be fine. Even so, OSM's community-created data tends to be much richer than that of national mapping agencies, though in some areas it might lack completeness.
OSM's full-planet data is a vast file and you'll need fairly serious hardware to process it, but country and regional extracts are available (for free) from third-party providers such as Geofabrik.
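Once you have an extract, pulling just the street data out of it is straightforward. As a rough sketch (assuming the pyosmium library and a .osm.pbf extract; the filename is only a placeholder), roads in OSM are simply ways carrying a highway tag:

    # Sketch: list streets from an OSM extract using pyosmium ("pip install osmium").
    import osmium

    class StreetReader(osmium.SimpleHandler):
        def __init__(self):
            super().__init__()
            self.streets = []

        def way(self, w):
            # Roads are ways tagged "highway"; the tag value gives the road class.
            if "highway" in w.tags:
                name = w.tags.get("name", "(unnamed)")
                self.streets.append((w.id, name, w.tags["highway"]))

    reader = StreetReader()
    reader.apply_file("region-latest.osm.pbf")   # e.g. a Geofabrik extract
    print(len(reader.streets), "streets found")

From there you can build whatever representation you like, or load the data into PostGIS with osm2pgsql if you want to render it.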
Mapnik is the standard rendering software, capable of 2D rendering on par with the major web-mapping sites (Google etc.). It's often used as part of a package called TileMill, which provides a CSS-like styling language.
You can find out more about the full toolchain, and the basics of OSM rendering, at the switch2osm.org tutorial site.
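To give a feel for the rendering side, here is a minimal sketch using Mapnik's Python bindings, assuming the data has already been imported (e.g. into PostGIS via osm2pgsql) and that a stylesheet exists at style.xml (both names are placeholders):

    # Sketch: render a map image with Mapnik's Python bindings.
    import mapnik

    m = mapnik.Map(1024, 768)
    mapnik.load_map(m, "style.xml")   # the stylesheet defines layers, datasources and rules
    m.zoom_all()                      # or m.zoom_to_box(...) for a specific extent
    mapnik.render_to_file(m, "streets.png")
    print("wrote streets.png")

The stylesheet is where nearly all of the work lives; tools like TileMill exist mainly to make writing it less painful.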

Have you checked out OpenStreetMap? They're mostly a repository for open-source map data (street layouts etc.), but the project links to renderers for their mapping data which might give you a start towards creating your own.

Related

Flow chart for determining which low-code platform to use? [closed]

Low code has been getting a lot of attention recently, and I am looking for hard, fact-based criteria for deciding which platform to choose for which purpose or industry.
I found a flow chart at Stackify, compiled by Ben Putano in 2017, which is a step in the right direction.
The chart references only Appian, OutSystems, Kony, Mendix, AgilePoint, Caspio, Salesforce and Power BI, but does not talk about platforms like LabVIEW, UiPath, Pega, Camunda or Blue Prism.
I would appreciate some theoretical, scientific input on the whole story of low-code and how to classify the different platforms.
We've built a no-code platform classification based on these four questions:
What are the skills your team is good at? (Sales, design, product management, programming, etc).
What is your app front-end? (Responsive web app, native mobile app, static website, API, chatbot, etc).
What type of app do you want to build? (Dashboard, directory, marketplace, communication app, community, social network, CRM/ERP, etc).
Do you plan to accept payments? (Yes/No).
We use this classification in our no-code tools advisor service: https://www.nocodesetup.com
Based on its performance, we see that roughly 80% of people complete the survey to get a personalized recommendation of the right no-code tools for building their app.
Other questions can also be useful for choosing the right low-code/no-code platform, but the more questions you add, the more likely people are to get stuck answering them.
Another great example of no-code platform classification is by Aron Korenblit (Head of Education at airtable.com): http://read.aatt.io/issues/no-code-is-not-a-monolith-207566
Hope this will be useful.
It's much more complex than this, I believe.
There are so many platforms equally capable of getting you the results you need that a thorough evaluation is still necessary. At the low-code company I work for, we saw vendor evaluation as a common pain point for many prospects, which led us to create this scorecard, where you can assess any low-code/no-code vendor to find the best fit for your use case.
You can use the tool to find which platform is the right fit!

Better approach to filtering Wikipedia edits [closed]

When you watch a particular Wikipedia article via its RSS feed, it's annoying to go without filtering, because most of the edits are spam, vandalism, minor edits, etc.
My approach is to create filters. I decided to remove all edits that don't carry the contributor's nickname but are identified only by the contributor's IP address, because most such edits are spam (though there are some good contributions). This was easy to do with regular expressions.
I also removed edits that contained vulgarisms and other typical spam keywords.
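Roughly, the IP filter looks like this (a simplified sketch; the IPv6 branch in particular is loose):

    # Sketch: flag edits whose contributor field is an IP address rather than a username.
    import re

    ANON_RE = re.compile(r"^(\d{1,3}(\.\d{1,3}){3}|[0-9A-Fa-f:]+:[0-9A-Fa-f:]+)$")

    def is_anonymous(contributor: str) -> bool:
        return bool(ANON_RE.match(contributor.strip()))

    print(is_anonymous("203.0.113.42"))   # True  -> filter out
    print(is_anonymous("ExampleUser"))    # False -> keep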
Do you know of a better approach using algorithms or heuristics built on regular expressions, AI, text-processing techniques, etc.? The approach should be able to detect bad edits (minor edits or vandalism) and should be able to learn incrementally what makes a good or bad contribution and update its database.
Thank you.
There are many different approaches you can take here, but traditionally spam filters with incremental learning have been implemented using naive Bayesian classifiers. Personally, I prefer the even easier-to-implement Winnow2 algorithm (details can be found in this paper).
First you need to extract features from the text you want to classify. Unfortunately, the Wikipedia RSS feeds don't seem to be particularly machine-readable, so you will probably need to do some preprocessing. Alternatively, you could use the MediaWiki API directly, or see whether one of the bot frameworks linked at the bottom of this page is of help to you.
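For example, the recentchanges list of the MediaWiki API already returns most of the raw material in structured form (a sketch; tune rcprop and rclimit to your needs):

    # Sketch: pull structured recent-changes data from the MediaWiki API.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|user|comment|flags|sizes|timestamp",
        "rclimit": 50,
        "format": "json",
    }
    changes = requests.get(API, params=params, timeout=30).json()["query"]["recentchanges"]
    for rc in changes:
        # Flag keys such as "minor" and "anon" are present only when the flag is set.
        print(rc["title"], rc.get("user"), "minor" in rc, "anon" in rc)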
Ideally you would end up with a list of words that were added, words that were removed, various statistics you can compute from that, and the metadata of the edit. I imagine the list of features would look something like this:
editComment: wordA (wordA appears in edit comment)
-wordB (wordB removed from article)
+wordC (wordC added to article)
numWordsAdded: 17
numWordsRemoved: 22
editIsMinor: Yes
editByAnIP: No
editorUsername: Foo
etc.
Anything you think might be helpful in distinguishing good from bad edits.
Once you have extracted your features, it is fairly simple to use them to train the Winnow/Bayesian classifier.
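For reference, the core of Winnow2 is only a few lines. A minimal sketch (feature names are assumed to be strings like those in the list above; the threshold choice is a simplification):

    # Sketch: an incrementally trained Winnow2-style classifier for "bad edit" detection.
    from collections import defaultdict

    class WinnowFilter:
        def __init__(self, alpha=2.0):
            self.alpha = alpha                       # promotion/demotion factor
            self.weights = defaultdict(lambda: 1.0)  # Winnow starts every weight at 1

        def predict(self, features):
            if not features:
                return False
            # Flag the edit as "bad" when the average active weight reaches 1.
            return sum(self.weights[f] for f in features) >= len(features)

        def update(self, features, is_bad):
            if self.predict(features) == is_bad:
                return                               # mistake-driven: only update on errors
            for f in features:
                if is_bad:
                    self.weights[f] *= self.alpha    # false negative: promote
                else:
                    self.weights[f] /= self.alpha    # false positive: demote

Feed update() each labelled edit (for example, one that was later reverted as vandalism) and the weights adapt incrementally, which covers the requirement to keep learning what counts as a good or bad contribution.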

How much should an application know about its database? [closed]

I'm building a message service for an application. Users are identified by an email address and a uid; the uid is also the primary key of the user table. I find it faster and simpler to let my application see and use that uid than to work with the user's email address.
Does it matter that my application knows something about the database's design? This is a specific example, but I mean the question as a generalization... how much knowledge is 'too much' when it comes to information sharing between an application and its database?
I'm asking mostly from the perspective of what would be considered 'good design'. I'm not quite sure how to tag this, suggestions appreciated.
Your application and service layer should abstract your data into a "domain" object used throughout your application(s). Only the data layer, which handles data retrieval and storage, should know the full database design; and it does need to know this information to properly query and store data.
Follow a standard layered approach to your application development - there are many books written on layered architecture.
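As a hypothetical illustration in Python (the names are made up), the point is that only the repository knows that uid is the primary key; the rest of the application just passes a User object around:

    # Sketch: domain object + data layer; only the data layer knows the schema.
    import sqlite3
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:                      # domain object used throughout the application
        uid: int
        email: str

    class UserRepository:            # data layer: the only code that knows table/key names
        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn

        def by_email(self, email: str) -> Optional[User]:
            row = self.conn.execute(
                "SELECT uid, email FROM users WHERE email = ?", (email,)
            ).fetchone()
            return User(*row) if row else None

The message service then depends on UserRepository rather than on column names, so using the uid internally stops being something the rest of the application has to know about.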
I think it depends on whether you plan to have this application scale beyond the use of this specific database. I think there comes a point where over-generalization makes the code more complex than it needs to be (and likely less efficient). You need to find a balance, and that will likely depend on the planned future of the application. Obviously, my answer is completely subjective.
It all depends on your application's needs.
One mark of good design is separating your code into layers so that it is reusable:
N-tier: UI <-- Business Logic Layer <-- Data Access Layer.
MVC: Model <--- View <--- Controller.
That way you can call your domain/data-access layer from anywhere in your UI, because it lives in a single place.
Also make sure your database is appropriately normalized (or deliberately denormalized) and that you are familiar with it, so you can see which approach your application really needs.

DotNetNuke Pros and cons for community blogging site [closed]

I'm evaluating DotNetNuke for a project in which an offshore team is going to be doing the development. In short, the application will be a community blogging platform with many similarities to Stack Overflow, except with no questions, just posts. Posts may include an image or video, tags, user info, title, body, community vote (up or down), comments, hotness, and a few other details. They should be taggable, sortable, and categorizable (beyond what a single set of tags provides). In the future the site will carry forums, a calendar, and a couple of other features for which DotNetNuke modules are available. Additionally, this site will incorporate a user experience that will include a lot of custom skinning.
Thoughts?
Using a web application framework (such as DotNetNuke) has a ton of benefits to help you get up and running faster and do less work when creating custom functionality.
However, you have to realize that you're basically incorporating tons of code into your project that you may not be familiar with. No matter how good the code is and how easy the framework is to learn, there's still going to be a significant learning curve for you and your team.
Your decision-making process (if you're still deciding whether or not to use DotNetNuke) should include, in addition to reading, talking, and other general investigation:
Downloading the application from Codeplex and checking out the source.
Investigating the third party modules that are out there.
Downloading a free module or two that comes with source, and try to reverse engineer the creator's development process. How did she integrate with the framework, what features did she take advantage of, what was written from scratch?
One place where DotNetNuke (or any other framework with tons of extensions available) can really shine is in taking existing extensions and customizing them. If you need to implement a given feature, check the third-party extension community first. You can probably find one that gets you a good percentage of the way there and use it as a foundation for your feature.
For example, if you want a photo gallery on your site, you probably don't want to write it from scratch. There are three major photo gallery modules out there whose source code you can get: the core Gallery module is free, Simple Gallery is cheap, and the source for Ultra Media Gallery is available for a reasonable amount compared to writing it yourself. Any of these could give you a good head start in implementing your features.

Recommended Globalization References [closed]

I'm working on a web application that is globalized. The development process is agile style, with several sprints already completed. Our globalization framework is good and localization efforts have been successful so far. However, we continue to run into questions during requirements development, particularly in data storage and validation requirements. I'm certain the questions we are wrestling with have been researched and solved many times and the answers are likely well known and documented somewhere. So far, I have been unable to find the compendium of information I'm looking for.
Here are some sample questions I'd like to find answers for:
What are the best practices for input, validation, storage and display of address information for a global application?
number of characters to store for address fields (Did you know there is a city name that contains 163 characters?)
validation of address data
What are the best practices for input, validation, storage and display of phone numbers for a global application?
Same question for a person's name?
So far, our approach to these issues has been to allow ample storage for the various fields and to perform minimal input validation, relying on the user to get it right. This approach is working OK at this stage in the project, but the various project stakeholders are not satisfied using this approach for the long term. There is a strong desire for clean data, efficient storage and attractive data presentation for all locales.
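For phone numbers, for example, a library-backed approach looks more promising than hand-rolled rules. A sketch using the phonenumbers package (a Python port of Google's libphonenumber; any equivalent port would do):

    # Sketch: normalize and validate a phone number, storing it in E.164 form.
    from typing import Optional

    import phonenumbers

    def normalize_phone(raw: str, default_region: str = "GB") -> Optional[str]:
        try:
            parsed = phonenumbers.parse(raw, default_region)
        except phonenumbers.NumberParseException:
            return None
        if not phonenumbers.is_valid_number(parsed):
            return None
        # E.164 is compact and locale-independent; reformat per locale at display time.
        return phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)

    print(normalize_phone("020 7946 0958"))        # '+442079460958'
    print(normalize_phone("not a phone number"))   # None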
Any recommendations out there for books, papers or websites that have a fairly complete handling of these and related topics?
Lots of good information here.
