In short, I am looking to create a POC (proof of concept) for an interface I have designed for aggregating data from several different sources. Currently I am just using flat tables in SQL with no relationships (although the data includes fields for that purpose) to store the data, which is gathered by various means (mostly PowerShell scripts). I also want it to be modular enough that I can add hooks for REST calls to other sources, so that if I make a call to get data about a certain asset, I don't have to know at that level where the data lives. The middleware will know where to find it and use the appropriate method to get it (SQL, REST, etc.).
I have been in IT for over 31 years. I have experience with SQL (I was a professional DBA at one point) and a few scripting languages, and I once wrote a similar app in C#, but I was given a template interface by a professional developer and just ran with that, changing it as I went.
I want a high-level view with colors indicating the general health of a location, and the ability to search the data for assets (anything from a VM, bare-metal server, Ethernet switch, FC switch, connected SAN, hypervisor, vCenter, Nutanix cluster, etc.) to find their location, their health, and, most importantly, their connections to other assets.
I know a little about everything: enough to be dangerous in most areas, and a master of some. What direction should I go here? I know BASIC, and I know PowerShell well enough to call myself an expert, though more than likely I'm just a hacker. I know the basic concepts of coding well enough (I was an adjunct at a community college 20+ years ago) to teach them to others. I want to use a language I will be able to hand over to a professional to maintain once we have a new DevOps member on the team.
EDIT: I wrote a very small program earlier this year using React. It was broken into the interface and the 'middleware' as two separate projects. Java is not an intuitive language for me, and while I could go this direction again, I am looking for possible alternatives.
I'm not sure there's a 'non-developer' way of doing this, unfortunately. If you really want to do this yourself, I'd suggest a C# API, since you have some experience with it (even if it's minimal, it's still comfort), and luckily for you .NET just released minimal APIs, which should make your life much easier (https://dotnetcoretutorials.com/2021/07/16/building-minimal-apis-in-net-6/).
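To make that concrete, here is a minimal sketch of what the middleware could look like as a .NET 6 minimal API (assuming the default web template with implicit usings). Everything here - the IAssetSource abstraction, the /assets route, the in-memory stand-in source - is a hypothetical illustration of routing a lookup to whichever backend holds the data, not a prescribed design:

    // Program.cs -- .NET 6 minimal API sketch; all names are hypothetical
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Each source knows how to fetch asset data its own way (SQL, REST, ...).
    // The caller never needs to know which backend actually holds the data.
    var sources = new List<IAssetSource>
    {
        new InMemoryAssetSource(),             // stand-in for your SQL flat tables
        // new RestAssetSource("https://..."), // future hook into another system
    };

    app.MapGet("/assets/{id}", async (string id) =>
    {
        foreach (var source in sources)
        {
            var asset = await source.FindAsync(id);
            if (asset is not null) return Results.Ok(asset);
        }
        return Results.NotFound();
    });

    app.Run();

    // The abstraction that keeps the middleware modular.
    public interface IAssetSource
    {
        Task<Asset?> FindAsync(string id);
    }

    // Trivial stand-in so the sketch runs; swap in SQL/REST implementations later.
    public class InMemoryAssetSource : IAssetSource
    {
        private static readonly Dictionary<string, Asset> Data = new()
        {
            ["vm-001"] = new Asset("vm-001", "DC-East", "Healthy"),
        };

        public Task<Asset?> FindAsync(string id) =>
            Task.FromResult<Asset?>(Data.TryGetValue(id, out var a) ? a : null);
    }

    public record Asset(string Id, string Location, string Health);

The interface is the part that matters: adding a new data source later means writing one new class, not touching the callers.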
In terms of actually displaying the data, if it's a really simple UI I'd keep it simple and use plain JavaScript and HTML/CSS (i.e., no frameworks such as React). It's a POC, and if someone comes in later and wants to move to a framework because there's more work to be done on it, it's not that hard to take your existing code and move it into something more modern. But if it's small, a framework would be over-engineering it, especially if you'd need to learn it all first.
For smaller projects like this, you can learn just the code you need for the project without needing to truly learn to code. Get your components from a library like Bootstrap (it's basically premade elements like dropdowns, progress bars, etc., and its documentation is pretty good), and keep it as simple as possible. Good luck!
So I'm designing this blog engine, and I'm trying to keep just my own blog data, without considering comments, a membership system, or any other type of multi-user data.
The blog itself revolves around two types of data. The first is the actual blog post entry, which consists of a title, the post body, and metadata (mostly dates and statistics); it's really simple and can be represented by a simple JSON object. The second type is the blog admin configuration and personal information. Comments and the like will be implemented using Disqus.
My main concern here is the ability of such an engine to scale under traffic spikes (I know you might argue this, but let's take it for granted). Since starting this project I've been moving along well with the rest of my stack, except for the data layer. I've been stuck on this dilemma of choosing a database. I considered MongoDB, but some reviews and benchmarking articles suggested slow reads once collections reach a certain size. Next I looked at Redis and its persistence features (RDB and AOF); while Redis is good at both fast reads and fast writes, I'm wary of it because I'm not familiar with it. And the search keeps going, on to things like "PostgreSQL 9.4 is now faster than MongoDB for storing JSON documents," and so on.
So is there any way I can settle this issue for good, considering that I only need to represent my data in a key/value structure, only require fast reads (not writes), and need the store to be fault tolerant?
Thank you
If I were you I would start small and not try to optimize for big data just yet. A lot of blogs you read about the downsides of a NoSQL solution are around large data sets - or people that are trying to do relational things with a database designed for de-normalized data.
My list of databases to consider:
Mongo. It has huge community support and, based on recent funding, it's going to be around for a while. It runs very well on a single instance and in a basic replica set. It's easy to set up and free, so it's worth spending a day or two running your own tests to settle the issue once and for all. Don't trust a blog.
Couchbase. Supports key/value storage and also persists to disk (http://www.couchbase.com/couchbase-server/features). It has also had some recent funding, so hopefully that means stability. =)
CouchDB/PouchDB. You can use PouchDB purely on the client side and it can connect to a server side CouchDB. CouchDB might not have the same momentum as Mongo or Couchbase, but it's an actively supported product and does key/value with persistence to disk.
Riak. http://basho.com/riak/. Another NoSQL that scales and is a key/value store.
You can install and run a proof-of-concept on all of the above products in a few hours. I would recommend this for the following reasons:
A given database might scale and tick all your boxes, but be unpleasant to use. Consider picking a database that feels fun! Sort of akin to picking Ruby/Python over Java because the syntax is nicer.
Your use case and domain will be fairly unique. Worth testing various products to see what fits best.
Each database has quirks, and you won't find them until you actually try one. One might have quirks that are passable; another will have quirks that are a showstopper.
The benefit of trying all of them is that they all support schemaless data, so if you write JSON, you can use all of them! No need to create objects in your code for each database.
If you abstract the database correctly in code, swapping out data stores won't be that painful. In other words, your code will be happier if you make it easy to swap out data stores.
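To illustrate that last point, here is a rough sketch of the kind of abstraction I mean, written in C# with the official MongoDB .NET driver as one example backend. The IPostStore interface and every name in it are made up; the point is that the rest of the blog engine only ever sees the interface, so trying Couchbase or Postgres later means writing one new class:

    // Hypothetical storage abstraction for the blog engine (C# for illustration).
    using System;
    using System.Threading.Tasks;
    using MongoDB.Driver;

    public class BlogPost
    {
        public string Id { get; set; } = "";   // the slug doubles as the key
        public string Title { get; set; } = "";
        public string Body { get; set; } = "";
        public DateTime Published { get; set; }
    }

    // The engine codes against this, never against a concrete database.
    public interface IPostStore
    {
        Task SaveAsync(BlogPost post);
        Task<BlogPost?> FindAsync(string slug);
    }

    // One possible implementation, using the official MongoDB .NET driver.
    public class MongoPostStore : IPostStore
    {
        private readonly IMongoCollection<BlogPost> _posts;

        public MongoPostStore(string connectionString)
        {
            var client = new MongoClient(connectionString);
            _posts = client.GetDatabase("blog").GetCollection<BlogPost>("posts");
        }

        public Task SaveAsync(BlogPost post) =>
            _posts.ReplaceOneAsync(p => p.Id == post.Id, post,
                                   new ReplaceOptions { IsUpsert = true });

        public async Task<BlogPost?> FindAsync(string slug) =>
            await _posts.Find(p => p.Id == slug).FirstOrDefaultAsync();
    }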
This is only an option for really simple CMSes, but it sounds like that's what you're building.
If your blog is super-simple as you describe and your main concern is very high traffic then the best option might be to avoid a database entirely and have your CMS generate static files instead. By doing this, you eliminate all your database concerns completely.
It's not the best option if you're doing anything dynamic or complex, but in this small use case it might fit the bill.
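As a rough sketch of what "generate static files" means in practice (C# purely for illustration; the post data and output paths are made up), the CMS renders every post to HTML at publish time, and a plain web server serves the result with no database in the request path:

    // Minimal static-site generation sketch: turn stored posts into plain HTML.
    using System;
    using System.IO;
    using System.Net;

    var posts = new[]
    {
        (Slug: "hello-world", Title: "Hello, world", Body: "First post."),
    };

    Directory.CreateDirectory("public");
    foreach (var post in posts)
    {
        var title = WebUtility.HtmlEncode(post.Title);
        var body = WebUtility.HtmlEncode(post.Body);
        var html = "<!DOCTYPE html><html><head><title>" + title + "</title></head>"
                 + "<body><h1>" + title + "</h1><p>" + body + "</p></body></html>";
        File.WriteAllText(Path.Combine("public", post.Slug + ".html"), html);
    }
    Console.WriteLine("Regenerated site into ./public");

Regeneration only happens when a post changes, so a traffic spike just hits static files.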
Given that a lot of people use content repositories, there must be a good reason. I'm building out a new web application that will need to store content. Can someone help me understand this?
What are the advantages of using a content repository like Apache Jackrabbit, as opposed to writing your own code/API to store images or text pages? Writing your own takes time, but so does implementing and learning a new framework like the content repository API. A benefit of rolling your own, it seems to me, is that you know your code and have immediate expertise if you need to enhance or fix it. With another framework you need to learn its foibles, and it is always easier to modify code you know than code you don't; you'll never know the underlying framework code as well as your own.
As I said, a lot of people use them, so there must be a reason. I can't see it being just another case of "everyone is using them, so we should too." At least I hope it isn't that. :)
A JCR repository allows you to store all your content (from structured database-type data to large multimedia files) in a single place and with a single API, which is extremely convenient and makes your code simpler, avoiding the impedance mismatch between files and data that you usually have in content-based systems.
JCR also provides a lot of infrastructure functionality that you won't have to build or assemble yourself: search (including full-text), observation (callbacks when something changes), versioning, data types including multi-value, ordered nodes, etc.
If you allow a shameless plug, my "JCR - best of both worlds" article at http://java.dzone.com/articles/java-content-repository-best describes this in more detail and also provides a reading list for the JCR spec, which should allow you to get a good overview without reading the whole thing.
The article uses Apache Sling for its examples, which combined with a JCR repository provides a very nice (IMO, but as a Sling committer I'm biased ;-) platform for content-based applications.
My most recent projects have involved both choices: a custom-built data store (MySQL and image files) with a multi-level caching mechanism, and a JCR-based commercial repository.
A few thoughts:
In the short run, a DIY solution offers reduced complexity: you only have to build and learn what you need. And there is at least the opportunity to optimize the data store for your particular application's needs -- more than likely speed of retrieval, but possibly storage footprint, security, or reliability concerns are foremost for you.

However, in the long run, you're looking at a significant increment of work to extend the home-grown system to a new content type (video, e.g.) or to provide new functionality (versioning, maybe).

Also, it's difficult to separate the choice of a data store approach from the choice of tools that content providers will use to populate and maintain the data store. You'll have to give your authors something more than an HTML form with a textarea and a submit button.
This is related to the advantages of standardization: compatibility and interchangeability. If everybody writes their own library and API, there is no compatibility or interchangeability, which leads to higher costs.
We're currently looking at using the Force.com platform as our development platform and the sales guys and the force.com website are full of reasons why it's the best platform in the world. What I'm looking for, though, is some real disadvantages to using such a platform.
Here are 10 to get you started.
Apex is a proprietary language. Other than the force.com Eclipse plugin, there's little to no tooling available such as refactoring, code analysis, etc.
Apex was modeled on Java 5, which is considered to be lagging behind other languages, and without tooling (see #1), can be quite cumbersome.
Deployment is still fairly manual with lots of gotchas and manual steps. This situation is slowly improving over time, but you'll be disappointed if you're used to having automated deployments.
Apex lacks packages/namespaces. All of your classes, interfaces, etc. live in one folder on the server. This makes code much less organized and class/interface names necessarily long to avoid name clashes and to provide context. This is one of my biggest complaints, and I would not freely choose to build on force.com for this reason alone.
The "force.com IDE", aka force.com eclipse plugin, is incredibly slow. Saving any file, whether it be a class file, text file, etc., usually takes at least 5 seconds and sometimes up to 30 seconds depending on how many objects, data types, class files, etc. are in your org. Saving is also a blocking action, requiring not only compilation, but a full sync of your local project with the server. Orders of magnitude slower than Java or .NET.
The online developer community does not seem very healthy. I've noticed lots of forum posts go unanswered or unsolved. I think this may have something to do with the forum software salesforce.com uses, which seems to suck pretty hard.
The data access DSL in Apex leaves a lot to be desired. It's not even remotely competitive with the likes of (N)Hibernate, JPA, etc.
Developing an app on Apex/VisualForce is an exercise in governor limits engineering. Easily half of programmer time is spent trying to optimize to avoid the numerous governor limits and other gotchas like visualforce view state limits. It could be argued that if you write efficient code to begin with you won't have this problem, which is true to an extent. However there are many times that you have valid reasons to make more than x queries in a session, or loop through more than x records, etc.
The save->compile->run cycle is extremely slow, especially when it involves zipping and uploading the entire static resource bundle just to test a minor CSS or JavaScript change.
In general, the pain of a young, fledgling platform without the benefits of it being open source. You have no way to validate and/or fix bugs in the platform. They say to post it to their IdeaExchange. Yeah, good luck with that.
Disclaimers/Disclosures: There are lots of benefits to a hosted platform such as force.com. Force.com does regularly enhance the platform. There are plenty of things about it I like. And I make money building on force.com.
I see you've gotten some answers, but I would like to reiterate how much time is wasted getting around the various governor limits on the platform. As much as I like the platform on certain levels, I would very strongly, highly, emphatically recommend against it as a general application development platform. It's great as a super configurable and extensible CRM application if that's what you want. While their marketing is exceptional at pushing the idea of Force.com as a general development platform, it's not even remotely close yet.
The efficiency of having a stable platform and avoiding big performance and stability problems is easily wasted in trying to code around the limits that people refer to. There are so many limits to the platform, it becomes completely maddening. These limits are not high-end limits you'll hit once you have a lot of users, you'll hit them almost right away.
While there are usually techniques to get around them, it's very hard to figure out strategies for avoiding them while you're also trying to develop the business logic of your actual application.
To give you a simple sense of how developer un-friendly the environment is, take the "lack of debugging environment" referred to above. It's worse than that. You can only see up to 20 of the most recent requests to the server in the debug logs. So, as you're developing inside the application you have to create a "New" debug request, select your name, hit "Save", switch back to your app, refresh the page, click back to your debug tab, try to find the request that will house your debug log, hit "find" to search for the text you're looking for. It's like ten clicks to look at a debug output. While it may seem trivial, it's just an example of how little care and consideration has been given to the developer's experience.
Everything about the development platform is a grafted-on afterthought. It's remarkable for what it is, but a total PITA for the most part. If you don't know exactly what you are doing (as in you're certified and have a very intimate understanding of Apex), it will easily take you upwards of 10-20x the amount of time that it would in another environment to do something that seems like it would be ridiculously simple, if you can even succeed at all.
The governor limits are indeed that bad. You have a combination of various limits (database queries, rows returned, "script statements", future calls, callouts, etc.) and you have to know exactly what you are doing to avoid these. For example, if you have a calculated rollup "formula" field on an object and you have a trigger on a child object, it will execute the parent object triggers and count those against your limits. Things like that aren't obvious until you've gone through the painful process of trying and failing.
You'll try one thing to avoid one limit, and hit another in a never-ending game of "whack-a-limit". In the process you'll have to drastically re-architect your entire app and approach, as well as rewrite all of your test code. You must have 75% test code coverage to deploy into production, which is actually a very good thing, but combined with all of the other limits, it's very burdensome. You'll actually hit governor limits writing your test code that wouldn't come up in normal user scenarios, but that will prevent you from achieving the coverage.
That is not to mention a whole host of other issues. Packaging isn't what you expect: you can't package up your app and deliver it to users without significant user intervention and configuration on the part of the administrator of the org. The AppExchange is a total joke, and they've even started charging 5K just to get your app listed. Importing with the data loader sucks, especially if you have any triggers.

You can't export all of your data in one step, relationships included, in such a way that it can easily be re-imported into another org (for example a dev org) in a single step. You can only refresh a sandbox from production once a month, no exceptions, and you can't include your data in a refresh by default unless you have called your account executive to get that feature unlocked. You can't mass delete data in custom objects. You can't change your package names.

Certain things can take numerous days to complete after you have requested them, such as a data backup before you want to deploy an app, with no progress report along the way and not much sense of when exactly the export occurred. Given that there are synchronization issues when there are relationships between the data, there are serious data integrity issues: there is no such thing as a "transaction" that can export numerous objects in a single step. There are probably some commercial tools to facilitate some of this, but they are not within reach of normal developers who may not have a huge budget.
Everything else the other people said here is true. It can take anywhere from five seconds to a minute sometimes to save a file.
I don't mean to be so negative because the platform is very cool in some ways and they're trying to do things in a multi-tenant environment that no one else is doing. It's a very innovative environment and powerful on some levels (I actually like VisualForce a lot), but give it another year or two. They're partnering with VMware, maybe that will lead to giving developers a bit more of a playpen rather than a jail cell to work in.
Here are a few things I can give you after spending a fair bit of time developing on the platform in the last fortnight or so:
There's no RESTful API. They have a SOAP-based API that you can call, but there is no way of making truly RESTful calls.
There's no simple way to take their SObjects and convert them to JSON objects.
The Visualforce pages are OK until you want to customize them, and then it's a whole world of pain.
Visualforce pages need to be bound to SObjects; otherwise there's no way to get the standard input fields, like the date picker or select list, to work.
The Eclipse plugin is OK if you want to work by yourself, but if you want to work in a large team with it, forget it. It doesn't handle synchronizing to and from the server, it crashes, and it isn't really helpful at all.
THERE IS NO DEBUGGER! If you want to debug, it's literally done with System.debug statements. This is probably the biggest problem I've found.
Their "MVC" model isn't really MVC. It's a lot closer to ASP.NET Webforms. Your views are tightly coupled to not only the models but the controllers as well.
Storing a large number of documents is not feasible. We need to store over 100 GB of documents, and we were quoted a ridiculous figure. We've decided to implement our document storage on Amazon's S3 infrastructure instead.
Even though the language is Java-based, it's not Java. You can't import any external packages or libraries. Also, the base libraries that are available are severely limited, so we've found ourselves implementing a bunch of stuff externally and then exposing those bits as services that are called by force.com.
You can call external SOAP- or REST-based services, but the message body is limited to 100 KB, so it's very restrictive in what you can call.
In all honesty, whilst there are potential benefits to developing on something like the force.com platform, for me you couldn't use it for true enterprise-level apps. At best you could write some basic CRUD-style applications, but once you move into anything remotely complicated I'd avoid it like the plague.
Wow - there's a lot here that I didn't even know about, even after working on the platform for a few years.
But just to add some other things...
The reason you don't have a line-by-line debugger is precisely because it's a multi-tenant platform. At least that's what SFDC says; in this age of thread-rich programming, that doesn't seem like much of an excuse, but that's apparently the reason. If you have to write code, you have System.debug(String) as your debugger - I remember having more sophisticated server debugging tools in Java 1.2 about 12 years ago.
Another thing I really hate about the system is version control. The Spring framework is not used for what Spring is usually used for; it's really more of a configuration tool in SFDC than version control. SFDC provides ZERO version control.
You can find yourself stuck for days on something that seems ridiculously easy, like, say, scheduling an SFDC report to export to a CSV file and email it to a list of recipients. About the easiest way to do that is to create a custom object with a custom field, a workflow rule, and a Visualforce email template; then, for code, you write a Visualforce component that streams the report data to the Visualforce email template as an attachment, plus anonymous Apex code that schedules a field update of the custom object. For SFDC developers, this is almost a daily task: stitching about five different technologies together to do tasks that seem so simple. And this can cause management headaches and tensions too. Typically, you'd get a suggestion in the user community to do something that doesn't work (like someone already said), and then try many things that, only after you've developed them, you find just don't work for some oddball reason, like "you can't schedule a Visualforce page" or "you can't call getContent from a schedulable context" or something equally arcane.
There are so many maddening little gotchas on the SFDC platform that, once you know WHY they're there, they make sense... but they're still very bad limitations that keep you from doing what you need to do. Here are some of mine:
You can't get record owner information "out of the box" on pretty much any kind of record - you have to write a trigger that links the owner to the record you're inserting at create time. Why? Short answer: because an owner can be either a "person" or a "queue", and the two are drastically different entities. It makes sense, but it can turn a project literally upside down.
Maddening security model. Example: "Manage Public Reports" permission is vastly different from "Create and Customize Reports" and that basically goes for everything on the platform... especially folders of any kind.
As mentioned, support is basically non-existent. If you are an extremely self-sufficient individual, or have a lot of SFDC resources, or have a lot of time and/or a very forgiving manager, or are in charge of a SFDC system that's working fine, you're in pretty good shape. If you are not in any of these positions, you can find yourself in deep trouble.
SFDC is a very seductive business proposition: no equipment footprint, pretty good security, fixed price, no infrastructure, AND you get web-based CRM with batchable and schedulable processing. But as the other posters said, it is really quite a ramp-up in development learning, and if you go with consulting, I think the lowest price I've seen was $200/hour.
Salesforce tends to integrate with technologies years after they become commonplace - JSON and jQuery come to mind - and if you have other common infrastructure that you want to integrate with, like JIRA, expect to pay a lot extra, and the integrations can be quite buggy.
And as one of the other posters mentioned, you are constantly fighting governor limits that can just drive you nuts. An attachment can NOT be > 5 MB. Period. And sometimes < 3 MB (if base64 encoded). Ten HTTP callouts in a class. Period. There are dozens of published governor limits, and many unpublished ones which you will undoubtedly find and just want to run out of your office screaming about.
I really, REALLY like the platform, but trust me - it can be one really cruel mistress.
But in fairness to SFDC, I'd say this: the biggest problem I find with the platform is not the platform itself, but the gargantuan expectations of almost anyone who has seen the platform but hasn't developed on it... and those people tend to be in positions of great authority in business organizations: marketing, sales, management, etc. Huge disconnects occur, and heads roll (or are threatened with rolling) daily, all because there's this great platform out there with weird gotchas and thousands of people struggling daily to get their heads around why things should just work when they just don't and won't.
EDIT:
Just to add to lomaxx's comments about the MVC: in SFDC terminology, this is closely related to what's known as the "viewstate", and it can be really buggy, in that what is on the VF page is not what is in the controller class for the page. So you have to go through weird gyrations to sync what's on the page with what the controller is going to write to SF when you click your "save" button (or make your HTTP callout or whatever). Man, it's annoying.
I think other people have covered the disadvantages in more depth but to me, it doesn't seem to use the MVC paradigm or support much in the way of code reuse at all. To do anything beyond simple applications is an exercise in frustration compared to developing an application using something like ASP.Net MVC.
Furthermore, the tools, the data layer, and the frustration of trying to refactor code or rename fields during the development process don't help.
I think as a CMS it's pretty cool, but as a platform for non-CMS applications, it doesn't make sense to me.
The security model is also very very restrictive... but this isn't the worst part. You can't currently assert whether a user has the ability to perform a particular action.
You can check to see what their role is, but you can't check if that role has permissions to perform the current action.
Even worse is the response from tech support: "try the action and if there's an exception, catch it".
Considering Force.com is a "cloud" platform, its ability to act as a client to an external WSDL-defined service is pretty underwhelming. See http://force201.wordpress.com/2010/05/20/when-generate-from-wsdl-fails-hand-coding-web-service-calls/ for what you might end up having to do.
To all of the above: I am curious how the release of VMforce, which allows Java programmers to write code for Force.com, changes the disadvantages listed here.
http://www.zdnet.com/blog/saas/vmforcecom-redefines-the-paas-landscape/1071
I guess they are trying to address these issues. At Dreamforce they mentioned they were trying to reduce the governor limits to only 4. I'm not sure what the details are. They have a REST API in early access, and they bought Heroku, which is Ruby development in the cloud. They also split out the database as database.com, so you can do your web development elsewhere and make your DB calls against database.com.
I guess they are trying to make it as agnostic as possible. But right now these are all announcements and early access, so (per their own Safe Harbor statements) don't purchase based on what they say, only on what they currently have.
Screen scraping seems like a useful tool - you can go onto someone else's site and steal their data - how wonderful!
But I'm having a hard time with how useful this could be.
Most application data is pretty specific to that application even on the web. For example, let's say I scrape all of the questions and answers off of StackOverflow or all of the results off of Google (assuming this were possible) - I'm left with data that is not very useful unless I either have a competing question and answer site (in which case the stolen data will be immediately obvious) or a competing search engine (in which case, unless I have an algorithm of my own, my data is going to be stale pretty quickly).
So my question is, under what circumstances could the data from one app be useful to some external app? I'm looking for a practical example to illustrate the point.
It's useful when a site publicly provides data that is (still) not available as an XML service. I had a client who used scraping to pull flight tracking data into one of his company's intranet applications.
The technique is also used for research. I had a client who wanted to compare the contents of several online dictionaries by part of speech, and all of these sites had to be scraped.
It is not a technique for "stealing" data. All ordinary usage restrictions apply. Many sites implement CAPTCHA mechanisms to prevent scraping, and it is inappropriate to work around these.
A good example is StackOverflow - no need to scrape data as they've released it under a CC license. Already the community is crunching statistics and creating interesting graphs.
There's a whole bunch of popular mashup examples on ProgrammableWeb. You can even meet up with fellow mashupers (O_o) at events like BarCamps and Hack Days (take a sleeping bag). Have a look at the wealth of information available from Yahoo APIs (particularly Pipes) and see what developers are doing with it.
Don't steal and republish, build something even better with the data - new ways of understanding, searching or exploring it. Always cite your data sources and thank those who helped you. Use it to learn a new language or understand data or help promote the semantic web. Remember it's for fun not profit!
Hope that helps :)
If the site has data that would benefit from being accessible through an API (and it would be free and legal to do so), but they just haven't implemented one yet, screen scraping is a way of essentially creating that functionality for yourself.
Practical example -- screen scraping would allow you to create some sort of mashup that combines information from the entire SO family of sites, since there's currently no API.
Well, to collect data from a mainframe. That's one reason some people use screen scraping. Mainframes are still in use in the financial world, often running software written in the previous century. The people who wrote that software may already be retired, and since it is critical to these organizations, they really hate it when new code needs to be added. So screen scraping offers an easy interface to communicate with the mainframe: collect information from it and send it onward to any process that needs it.
Rewrite the mainframe application, you say? Well, software on mainframes can be very old. I've seen software on mainframes that was over 30 years old, written in COBOL. Often, those applications work just fine and companies don't want to risk rewriting parts because it might break some code that had been working for over 30 years! Don't fix things if they're not broken, please. Of course, additional code could be written but it takes a long time for mainframe code to be used in a production environment. And experienced mainframe developers are hard to find.
I had to use screen scraping in a software project myself. It was a scheduling application that had to capture the console output of every child process it started. That's actually the simplest form of screen scraping, and many people don't even realize that redirecting the output of one application to the input of another is still a kind of screen scraping. :)
Basically, screen scraping allows you to connect one (web) application with another one. It's often a quick solution, used when other solutions would cost too much time. Everyone hates it, but the amount of time it saves still makes it very efficient.
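As a minimal sketch of that console-capture flavor of screen scraping (C# for illustration; the child process name is made up):

    // Capture a child process's console output - the simplest screen scraping.
    using System;
    using System.Diagnostics;

    var psi = new ProcessStartInfo
    {
        FileName = "some-legacy-tool",   // hypothetical child process
        Arguments = "--report",
        RedirectStandardOutput = true,   // "scrape" its console output
        UseShellExecute = false,
    };

    using var process = Process.Start(psi)!;
    string output = process.StandardOutput.ReadToEnd();
    process.WaitForExit();

    // Forward the captured text to whatever needs it.
    Console.WriteLine($"Captured {output.Length} characters from the child process.");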
Let's say you wanted to get scores from a popular sports site that did not make the information available via an XML feed or API.
For one project we found a (cheap) commercial vendor that offered translation services for a specific file format. The vendor didn't offer an API (it was, after all, a cheap vendor) and instead had a web form to upload and download from.
With hundreds of files a day, the only way to do this was to use WWW::Mechanize in Perl, screen-scrape our way through the login and upload boxes, submit the file, and save the returned file. It's ugly and definitely fragile (if the vendor changes the site in the least, it could break the app), but it works. It's been working now for over a year.
One example from my experience.
I needed a list of major cities throughout the world with their latitude and longitude for an iPhone app I was building. The app would use that data along with the geolocation feature on the iPhone to show which major city each user of the app was closest to (so as not to show exact location), and plot them on a 3D globe of the earth.
I couldn't easily find an appropriate list in an XML/Excel/CSV type of format anywhere, but I did find this Wikipedia page with (roughly) the info I needed. So I wrote up a quick script to scrape that page and load the data into a database.
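As a sketch of what such a one-off scraper might look like (C# with the HtmlAgilityPack library, purely for illustration; the URL, the XPath, and the table layout are assumptions that would have to match the real page):

    // One-off scraper sketch: pull rows out of an HTML table on a web page.
    using System;
    using HtmlAgilityPack; // community HTML parser for .NET

    var web = new HtmlWeb();
    var doc = web.Load("https://example.org/list-of-cities"); // hypothetical URL

    // The XPath is an assumption; inspect the real page to find the right table.
    var rows = doc.DocumentNode.SelectNodes("//table[1]//tr");
    if (rows is null) return;

    foreach (var row in rows)
    {
        var cells = row.SelectNodes("td");
        if (cells is { Count: >= 3 })
        {
            string city = cells[0].InnerText.Trim();
            string lat  = cells[1].InnerText.Trim();
            string lon  = cells[2].InnerText.Trim();
            // A real script would insert these into a database here.
            Console.WriteLine($"{city}: {lat}, {lon}");
        }
    }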
Any time you need a computer to read the data on a website. Screen scraping is useful in exactly the same instances that any website API is useful. Some websites, however, don't have the resources to create an API themselves; screen scraping is the developer's way around that.
For instance, in the earlier days of Stack Overflow, someone built a tool to track changes to your reputation over time, before Stack Overflow itself provided that feature. Since Stack Overflow had no API at the time, the only way to do that was to screen scrape.
The obvious case is when a webservice doesn't offer reverse search. You can implement that reverse search over the same data set, but it requires scraping the entire dataset.
This may be fair use if the reverse search also requires significant pre-processing, e.g. because you need to support partial matching. The data source may not have the technical skills or computing resources to provide the reverse search option.
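A toy sketch of the idea: once you've scraped the full forward mapping, the reverse search is just an index built the other way (C# for illustration; all the data here is made up):

    // Build a reverse index over a scraped forward mapping (toy example).
    using System;
    using System.Collections.Generic;

    // Forward data set, e.g. a scraped "phone directory": name -> number.
    var forward = new Dictionary<string, string>
    {
        ["Alice"] = "555-0100",
        ["Bob"]   = "555-0199",
    };

    // Reverse index: number -> names. This is the search the original service
    // doesn't offer, and building it requires the *entire* scraped data set.
    var reverse = new Dictionary<string, List<string>>();
    foreach (var (name, number) in forward)
    {
        if (!reverse.TryGetValue(number, out var names))
            reverse[number] = names = new List<string>();
        names.Add(name);
    }

    Console.WriteLine(string.Join(", ", reverse["555-0100"])); // prints: Alice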
I use screen scraping daily. I run some eCommerce sites and have screen-scraping scripts running every day to gather product lists automatically from my suppliers' wholesale sites. This gives me up-to-date information on all the products available to me from several suppliers and lets me flag non-economical margins caused by price changes.