What's the best way to make a mobile-friendly site?

Speaking entirely in technology-free terms, what is the best way to make a mobile-friendly site? That is, I want to make a site that will work on a regular computer but also have mobile versions of the pages. Should I rewrite each page? The pages will probably have different functionality, so should I rewrite the backend code? Should it be an effectively different site with the same database?

On my site, I detect user agent, and for known mobile browsers I serve a different stylesheet, with some larger/less necessary items left off some pages. The backend doesn't really change.
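For illustration, a minimal sketch of that approach in TypeScript with Express; the user-agent regex and stylesheet paths are my own assumptions, not a robust detection scheme:

```typescript
// A rough sketch of serving a different stylesheet per user agent.
// The regex and CSS paths are illustrative assumptions.
import express from "express";

const app = express();

app.get("/", (req, res) => {
  const ua = req.get("User-Agent") ?? "";
  // Naive check; a real site would consult a maintained device database.
  const isMobile = /Mobi|Android|iPhone|IEMobile/i.test(ua);
  const stylesheet = isMobile ? "/css/mobile.css" : "/css/desktop.css";
  res.send(
    `<html><head><link rel="stylesheet" href="${stylesheet}"></head>` +
      `<body><p>Same backend, different presentation.</p></body></html>`
  );
});

app.listen(3000);
```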

I added a mobile presentation layer to an operational site about a year ago. Based on the architecture of the site (hopefully this isn't too technology dependent for you) I added a new set of JSPs to accommodate mobile browsers (sidenote: see http://wurfl.sourceforge.net/ for a great way to build mobile pages independent of browser type). Additionally some of the back-end functionality was changed due to the limited functionality of most mobile browsers. So, in short, the integration wasn't as painful as one would expect.
Good luck!

This is a pretty broad question, but here goes:
If the site is primarily about the content, meaning it's not so much a service you use as it's a publication you read, then I'd try to avoid publishing two sites wherever possible. Concentrate on simple presentation using mature technologies that mobile browsers can handle fairly well.
If it's essentially a software application delivered via the network, then things get trickier, because you're going to want to consider the UI of the mobile device, and how it differs from the desktop.
This should go without saying, but either way, if you have many mobile users, you should keep that in mind when you author content for the site. Formats, length, voice, etc.

In addition to the WURFL / WALL capabilities system that Todd mentioned, there are JavaServer Faces libraries available that use alternate WML renderkits for mobile phones.

One way I have done it in the past was to make sure my data was abstracted well in the data tier, and then use separate middle-tier models to pull what was appropriate. In my case the application was a weather application, and the display capabilities of the target devices were really limited, so we opted to show the user only the essentials on the mobile devices while the website was full-featured. That was probably 10 years ago, when WAP was big. But these days, with devices getting bigger screens and better bandwidth, you may want to consume and display the exact same data with a different view model.
I never really know what type of application will need to consume the data in the future. We do a lot of apps across platforms but the domain model rarely changes. So I end up using the same middle tier objects where I can and pulling that data in different clients. A good example of this is a recent project where we had a rich internet application (widget), a full website, and a web service consuming the same data. Data abstraction in the middle-tier really shines in this environment.
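A minimal sketch of that "one domain model, several view models" idea in TypeScript; the WeatherReport shape and both mappers are illustrative assumptions:

```typescript
// One domain model feeding different per-client view models.
interface WeatherReport {
  city: string;
  temperatureC: number;
  humidity: number;
  windSpeedKmh: number;
  forecast: string[];
}

// The full-featured website gets everything.
const toDesktopView = (r: WeatherReport) => ({ ...r });

// A constrained mobile client gets only the essentials.
const toMobileView = (r: WeatherReport) => ({
  city: r.city,
  temperatureC: r.temperatureC,
});

const report: WeatherReport = {
  city: "Oslo",
  temperatureC: 4,
  humidity: 80,
  windSpeedKmh: 12,
  forecast: ["rain", "sleet", "sun"],
};

console.log(toMobileView(report)); // { city: 'Oslo', temperatureC: 4 }
console.log(toDesktopView(report)); // the whole report
```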

On a very high level of abstraction, there are two main caveats with mobile devices: (1) their screen is small, (2) their network connection is intermittent. This basically means that you need to present the content so that it looks fine even on a small (variable-size) screen, and preferably make it cacheable too so that your users can browse the content while offline. Then there's also the problem of low bandwidth and high latency, but those are slightly less important nowadays.
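One modern way to get that offline cacheability is a service worker with a cache-first strategy; a minimal sketch, assuming a service-worker context and an invented cache name and asset list:

```typescript
// A minimal cache-first service worker for offline browsing.
// CACHE name and ASSETS list are illustrative assumptions.
const CACHE = "content-v1";
const ASSETS = ["/", "/styles.css", "/index.html"];

self.addEventListener("install", (event: any) => {
  // Pre-cache the core pages so they are readable offline.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve from cache when possible; fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit: Response | undefined) => hit ?? fetch(event.request))
  );
});
```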

This is a very thorough overview of how to make a site mobile, though I hope it's fair to say that there will always be different requirements for anyone seeking to go mobile. If you have a blog, then you could just as easily make it mobile-friendly using Mippin Mobilizer; it's free, provides branding customisation tools, and with a big audience already browsing a wide mix of mobilized content, there are opportunities to generate advertising revenue around your blog.
This is because the Mippin Mobilized blog becomes part of a much wider community of content, people, news, blogs and listings, all connecting around content, at the mobile site:
http://mippin.com (on a mobile browser).
Take a look at the Mobilizing tool because it shows off what the site can do in a second:
www.mippin.com/mobilizer
Only if you have a blog of course...

Related

Should I create two separate frontends - desktop and mobile?

We're facing a new React project, which should run on desktop and mobile. Some of the desktop features won't be available on mobile.
We're debating whether we should create two separate front-ends or a single, responsive front-end.
Could you elaborate on the pros and cons of each approach? Which one would you choose in my place?
Should I create two separate frontends - desktop and mobile?
No.
For a well-written web-app, a single, manageable codebase ought to suffice for all hardware:
Desktops
Laptops
Tablets
Touchphones
Build a single, responsive front-end. Where features on larger screens and smaller screens don't match exactly, use a combination of:
graceful degradation
progressive enhancement
This is simply best-practice, efficient, future-proofed project management, using DRY (Don't Repeat Yourself) as a philosophy to work by. [1]
Rather than WET (Write Everything Twice).
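As a concrete illustration, here's a minimal sketch of that single responsive front-end in React/TypeScript; the breakpoint, the hook, and the component names are my own assumptions:

```tsx
// One codebase, one responsive front-end. The 768px breakpoint and the
// component names are illustrative assumptions.
import React, { useEffect, useState } from "react";

function useIsWide(query = "(min-width: 768px)"): boolean {
  const [wide, setWide] = useState(() => window.matchMedia(query).matches);
  useEffect(() => {
    const mql = window.matchMedia(query);
    const onChange = (e: MediaQueryListEvent) => setWide(e.matches);
    mql.addEventListener("change", onChange);
    return () => mql.removeEventListener("change", onChange);
  }, [query]);
  return wide;
}

// Hypothetical components standing in for the real feature set.
const CoreFeatures = () => <p>Available on every device</p>;
const DesktopOnlyPanel = () => <p>Extras for larger screens</p>;

export function Dashboard() {
  const wide = useIsWide();
  return (
    <div>
      <CoreFeatures />                {/* progressive enhancement: the baseline */}
      {wide && <DesktopOnlyPanel />}  {/* gracefully dropped on small screens */}
    </div>
  );
}
```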
IMHO one can reach 90%-99% with a single site / PWA, depending on the application. To reach 100%, a company has to invest heavily. So I think for many companies this boils down to:
Do we have the resources?
Do we want to pay the price (initial investment)?
Do we want to pay the price (maintenance, new features, etc.)?
My recommendation is to start simply with a web app. Make a good separation of concerns at the backend (split the API/business logic from the presentation layer, etc.). If you are successful and want to reach the 100%, invest in further options.
There are many examples (news sites, for instance) of companies that have abandoned the approach of having two separate implementations.
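To make the separation-of-concerns suggestion concrete, here's a minimal sketch in TypeScript with Express; the routes and product data are illustrative assumptions:

```typescript
// Business logic behind an API, with the web page as just one consumer.
import express from "express";

const app = express();

// Business logic lives in one place, independent of any presentation.
function listProducts() {
  return [{ id: 1, name: "Tent" }, { id: 2, name: "Stove" }];
}

// API layer: plain JSON for any future client (PWA, native app, ...).
app.get("/api/products", (_req, res) => res.json(listProducts()));

// Presentation layer: the web app consumes the same logic.
app.get("/", (_req, res) => {
  const items = listProducts().map((p) => `<li>${p.name}</li>`).join("");
  res.send(`<ul>${items}</ul>`);
});

app.listen(3000);
```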

HTML5 Offline Data Storage Options

A developer buddy of mine recently started at a new company, and one of his tasks is to come up with a new web application that allows users to work offline when their staff are onsite in remote locations, and then later to sync with a server-side database for which they have not yet determined a provider (I'm guessing it will be SQL Server).
I've done some looking around, and it seems like two viable options are LocalStorage and IndexedDB, with LocalStorage being the more popular technology. They want to go HTML5 and that's all fine and dandy, but long story short, my question(s) is/are...
What are their offline data storage options in HTML5?
Are there older solutions that have better support?
Are those methods less risky?
Do they take less time to implement?
The concept of storing data offline in a web application isn't a new idea, but doing so with today's newest technologies (HTML5, ASP.NET 4.0/4.5) is where you start getting into sparsely explored territory.
What are some things your company does that work, and what doesn't work?
Any intelligent evidence based replies will more than likely get my upvote so don't just rush to get the first answer and try to score some quick points, I'm looking for some good solid feedback here. Thank you.
There are several options for offline storage in HTML5:
Web Storage
Web SQL Database
IndexedDB
Filesystem API
You have a very good presentation of all these options here:
http://www.html5rocks.com/en/tutorials/offline/whats-offline/#toc-older-storage
Compared to older options (cookies, plugin-based storage, browser-specific features), I'll quote from the article:
"The newer storage APIs, what we might call "HTML5 storage", are generally superior in terms of openness and standards compliance. Of course, not all browsers include all of the new APIs, and you may have to support older browsers that don't include any of them at all. So the older techniques are still useful for graceful degradation."
Other useful links:
http://php-html.net/tutorials/html5-local-storage-guide/
http://www.tutorialspoint.com/html5/html5_web_sql.htm
Hope this helps...
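To make the graceful-degradation point from the quote concrete, here's a minimal TypeScript sketch that feature-detects Web Storage before relying on it (the probe key and the fallback branch are my own assumptions):

```typescript
// Detect Web Storage support before using it, falling back gracefully.
function storageAvailable(): boolean {
  try {
    const probe = "__storage_probe__";
    window.localStorage.setItem(probe, probe);
    window.localStorage.removeItem(probe);
    return true;
  } catch {
    // Old browser, private mode, or quota exceeded.
    return false;
  }
}

if (storageAvailable()) {
  localStorage.setItem("draft", JSON.stringify({ note: "written offline" }));
} else {
  // Degrade gracefully to an older technique (cookies, plugin storage, ...).
}
```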
I think your best choice is Local Storage, since it's the most widely implemented of the storage options available under HTML5.
IndexedDB has been over-engineered (in my opinion) and is not widely supported (yet), for all of Mozilla's insistence that it's better than sliced bread; and WebSQL's future is looking a bit uncertain, as Mozilla is refusing to implement it and it is no longer being worked on due to an impasse between the W3C and the browser vendors who have actually implemented it (Chrome/Safari/Opera).
There is currently a bit of a browser explosion happening thanks to smartphone adoption, so it's hard to determine how much of the market supports HTML5 LocalStorage features, but using StatCounter I've been able to calculate that between Chrome (4+), Firefox (3.+), Safari (4+), Opera (10.5+) and IE (8+), including iPhone and Android devices, you'll have captured around 80-85% of the market, with this figure going upwards at a rate of 1-2% per month. The remainder is IE 6/7 (which stubbornly tends to hang on), older versions of new browsers (whose update features generally keep them current), and some mobile browsers stuck back in the stone age.
As for older options, I would add User Data Persistence for IE6/7 to the list provided by @user998692, but either way you're going to be caught up in lots of browser incompatibility issues and support for a hodge-podge of technologies, which will complicate your codebase and testing and increase your delivery timelines (and cost) accordingly. If you do want to go down this road, however, I would recommend you take a look at PersistJS, as the guys who put it together have done much of the work you would need already.
These days it's worth looking forward (to what the market will look like in 1-2 years' time, when your app is propagating and gaining a wide user base) rather than backwards, so I'd say HTML5 LocalStorage is probably your best option.

Content Management System targeting Mobile Devices

If I want to implement a CMS for mobile devices, what kind of points should I take into account?
For example: make the page size smaller, use optimized (small) pictures. Any other ideas?
Also, what kind of rules can be applied when converting web pages that WERE designed for desktop browsers into ones that are easily displayed in mobile browsers?
I know that mobile devices vary widely in their capabilities and properties, but I'm still trying to list out some rules.
Also any other ideas, suggestions, questions and advices are welcome on this topic.
Thanks for your opinions and answers.
Short foreword: all the things I'm listing below are things the main product of the company I work for already does or has worked out a solution for; the whole goal of this answer is to give you pointers.
Identifying the phone
When dealing with mobile as a web context, it's absolutely imperative that you identify the phone correctly. That should be the highest priority. Here are a couple of issues with identifying phones and their features:
Do not use a userAgent.contains("iPhone") detection scheme. There are already loads of web bots and other applications which contain iPhone in their user agent string, and you'd identify them incorrectly.
Not all phones even send User-Agent headers. However, some of those send UAProf URLs which contain all the phone's features in RDF format. Unfortunately this introduces the next two problems:
Obviously you won't have access to every single device's data out there, and you're bound to use public data repositories such as WURFL. These databases are, however, incomplete, slightly lagging behind, or missing data you'd like to have. They are your best bet for an initial data set, though.
UAProfs lie. Yes, they contain false information - lots of it! Partly this is because the manufacturers forget to update the XML files, and partly because the UAProf files are written during the development of the phone, and as we know, features do change during development.
When relying on a feature, make sure you're not relying on a specific version of a specific phone. For example, BlackBerry has a feature called Tile which is basically a really fancy bookmark, but you can't just serve it to all BlackBerry phones; you have to identify the operating system version of the actual phone to serve the right variation of the Tile. The same goes for touch screens: the iPhone wasn't the first one with a touch screen and most certainly isn't the only one either. Also, don't expect a situation where the device has only one form of input; for example, the Nokia N900 has a touch screen, a physical keyboard and even a stylus.
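As a rough illustration of that identification order (UAProf header first, then User-Agent, matched against a device database), here's a sketch in TypeScript with Express. The header names are the usual UAProf ones, while the in-memory list and the substring matching are hypothetical stand-ins for a real WURFL lookup, which would use its own matching algorithm:

```typescript
// Identify the device before choosing what to serve.
import express from "express";

// Hypothetical stand-in for a WURFL-style device database.
const deviceDb: Array<{ marker: string; isMobile: boolean }> = [
  { marker: "BlackBerry", isMobile: true },
  { marker: "Nokia", isMobile: true },
];

function lookupDevice(key: string) {
  // Placeholder matching only; real lookups should use the database's
  // own algorithm rather than naive substring checks.
  return deviceDb.find((d) => key.includes(d.marker));
}

const app = express();

app.use((req, res, next) => {
  // Some phones send no User-Agent but do advertise a UAProf URL header.
  const uaprof = req.get("X-Wap-Profile") ?? req.get("Profile");
  const ua = req.get("User-Agent") ?? "";
  const device = lookupDevice(uaprof ?? ua);
  // Treat unknown devices as desktop rather than guessing wrongly.
  res.locals.isMobile = device?.isMobile ?? false;
  next();
});
```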
Creating the actual pages
Thankfully this is something people have agreed upon, and when creating the pages you're supposed to use XHTML-MP. But oh, how one would wish things were this easy...
All phones have differing levels of XHTML-MP/CSS support. As an example, if I remember correctly, some older BlackBerries don't support background-color for block elements. Or header tags. We've also seen incorrect ordering of span elements when there are several in a row. Oh, and for some reason tables are really hard. Basically, you have to keep markup and styling tricks to a minimum.
You can't test for the existence of a feature by using the feature itself. If you want to detect JavaScript support, you might think that adding a bit of JavaScript to the page for that purpose alone would work, right? Nope, that crashes a significant percentage of the mobile phones visiting your site. Sure, new phones don't crash, but not everyone has bought their phone in the last 12 months. Also, mobile-specific JavaScript APIs differ per manufacturer; as yet another example, there are currently at least three different APIs for JavaScript-based geolocation detection, none of them interoperable with the others.
Add all these on top of normal CMS features (security, content management and transformation, caching, modularity, visitor tracking and whatnot) and you should have some sort of picture of how everything affects everything and how you really should consider the cost of making your own.
In fact, even though this is sort of against the general spirit of SO, I'd strongly suggest you get a ready-made solution such as ours and use that instead for your site-building needs. After all, our product has seven years' worth of specialized development under its hood.
A couple that we used ...
A CMS targeted at mobile devices should be able to detect the device type and detect (or have a database of) screen resolutions, so that content, particularly images, can be scaled appropriately.
The rendering engine should also be able to determine if the device can handle HTML or WAP and switch markup languages appropriately.
Paging capability on the output, as opposed to rendering very large pages (if content pages are large), is also helpful.
Clean integration with the corresponding web site CMS (so content doesn't need to be produced twice) is also helpful if there is, in fact, a corresponding large-form web site.
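For the paging point, a minimal TypeScript sketch (the page size and content are arbitrary assumptions):

```typescript
// Serve long content in small chunks instead of one huge page.
function page<T>(items: T[], pageNo: number, pageSize = 10): T[] {
  const start = (pageNo - 1) * pageSize;
  return items.slice(start, start + pageSize);
}

const paragraphs = Array.from({ length: 45 }, (_, i) => `Paragraph ${i + 1}`);
console.log(page(paragraphs, 2, 10));            // paragraphs 11..20
console.log(Math.ceil(paragraphs.length / 10));  // 5 pages in total
```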

Looking for an example of when screen scraping might be worthwhile

Screen scraping seems like a useful tool - you can go onto someone else's site and steal their data - how wonderful!
But I'm having a hard time seeing how useful this could be.
Most application data is pretty specific to that application even on the web. For example, let's say I scrape all of the questions and answers off of StackOverflow or all of the results off of Google (assuming this were possible) - I'm left with data that is not very useful unless I either have a competing question and answer site (in which case the stolen data will be immediately obvious) or a competing search engine (in which case, unless I have an algorithm of my own, my data is going to be stale pretty quickly).
So my question is, under what circumstances could the data from one app be useful to some external app? I'm looking for a practical example to illustrate the point.
It's useful when a site publicly provides data that is (still) not available as an XML service. I had a client who used scraping to pull flight tracking data into one of his company's intranet applications.
The technique is also used for research. I had a client who wanted to compare the contents of several online dictionaries by part of speech, and all of these sites had to be scraped.
It is not a technique for "stealing" data. All ordinary usage restrictions apply. Many sites implement CAPTCHA mechanisms to prevent scraping, and it is inappropriate to work around these.
A good example is StackOverflow - no need to scrape data as they've released it under a CC license. Already the community is crunching statistics and creating interesting graphs.
There's a whole bunch of popular mashup examples on ProgrammableWeb. You can even meet up with fellow mashupers (O_o) at events like BarCamps and Hack Days (take a sleeping bag). Have a look at the wealth of information available from Yahoo APIs (particularly Pipes) and see what developers are doing with it.
Don't steal and republish, build something even better with the data - new ways of understanding, searching or exploring it. Always cite your data sources and thank those who helped you. Use it to learn a new language or understand data or help promote the semantic web. Remember it's for fun not profit!
Hope that helps :)
If the site has data that would benefit from being accessible through an API (and it would be free and legal to do so), but they just haven't implemented one yet, screen scraping is a way of essentially creating that functionality for yourself.
Practical example -- screen scraping would allow you to create some sort of mashup that combines information from the entire SO family of sites, since there's currently no API.
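As a sketch of what that looks like in practice (the URL and the regex are invented for illustration; real scraping usually warrants a proper HTML parser):

```typescript
// Fetch a page and pull values out of its HTML.
async function scrapeTitles(url: string): Promise<string[]> {
  const res = await fetch(url);
  const html = await res.text();
  // Inherently fragile: any change to the site's markup can break this.
  const matches = html.matchAll(/<h2 class="title">([^<]+)<\/h2>/g);
  return [...matches].map((m) => m[1].trim());
}

scrapeTitles("https://example.com/questions").then((titles) =>
  console.log(titles)
);
```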
Well, to collect data from a mainframe - that's one reason some people use screen scraping. Mainframes are still in use in the financial world, often running software that was written in the previous century. The people who wrote it might already be retired, and since this software is very critical for these organizations, they really hate it when new code needs to be added. So screen scraping offers an easy interface to communicate with the mainframe, to collect information from it and then send it onwards to any process that needs it.
Rewrite the mainframe application, you say? Well, software on mainframes can be very old. I've seen software on mainframes that was over 30 years old, written in COBOL. Often, those applications work just fine and companies don't want to risk rewriting parts because it might break some code that had been working for over 30 years! Don't fix things if they're not broken, please. Of course, additional code could be written but it takes a long time for mainframe code to be used in a production environment. And experienced mainframe developers are hard to find.
I myself have had to use screen scraping in a software project too. This was a scheduling application which had to capture the console output of every child process it started. It's the simplest form of screen scraping, actually, and many people don't even realize that if you redirect the output of one application to the input of another, it's still a kind of screen scraping. :)
Basically, screen scraping allows you to connect one (web) application with another one. It's often a quick solution, used when other solutions would cost too much time. Everyone hates it, but the amount of time it saves still makes it very efficient.
Let's say you wanted to get scores from a popular sports site that did not offer the information available with an XML feed or API.
For one project we found a (cheap) commercial vendor that offered translation services for a specific file format. The vendor didn't offer an API (it was, after all, a cheap vendor) and instead had a web form to upload and download from.
With hundreds of files a day, the only way to do this was to use WWW::Mechanize in Perl, screen-scrape our way through the login and upload boxes, submit the file, and save the returned file. It's ugly and definitely fragile (if the vendor changes the site in the least it could break the app), but it works. It's been working now for over a year.
One example from my experience.
I needed a list of major cities throughout the world with their latitude and longitude for an iPhone app I was building. The app would use that data along with the geolocation feature on the iPhone to show which major city each user of the app was closest to (so as not to show exact location), and plot them on a 3D globe of the earth.
I couldn't easily find an appropriate list in XML/Excel/CSV format anywhere, but I did find a Wikipedia page with (roughly) the info I needed. So I wrote up a quick script to scrape that page and load the data into a database.
Any time you need a computer to read the data on a website. Screen scraping is useful in exactly the same instances that any website API is useful. Some websites, however, don't have the resources to create an API themselves; screen scraping is the developer's way around that.
For instance, in the earlier days of Stack Overflow, someone built a tool to track changes to your reputation over time, before Stack Overflow itself provided that feature. The only way to do that, since Stack Overflow has no API, was to screen scrape.
The obvious case is when a webservice doesn't offer reverse search. You can implement that reverse search over the same data set, but it requires scraping the entire dataset.
This may be fair use if the reverse search also requires significant pre-processing, e.g. because you need to support partial matching. The data source may not have the technical skills or computing resources to provide the reverse search option.
I use screen scraping daily. I run some eCommerce sites and have screen-scraping scripts running daily to gather product lists automatically from my suppliers' wholesale sites. This allows me to have up-to-date information on all the products available to me from several suppliers, and allows me to flag non-economical margins due to price changes.

What are the cons of a web based application

I am going to write a database application for the camp I work for. I am thinking about writing it in C# with a Windows GUI interface, but using a browser as the application is seeming more and more appealing for various reasons. What I am wondering is why someone would not choose to write an application as a web application. E.g., the back button can cause you some trouble. Are there other things that anyone can think of?
There are plenty of cons:
Speed and responsiveness tend to be significantly worse
Complicated UI widgets (such as tree controls) are harder to do
Rendering graphics of any kind is pretty tricky, 3D graphics is even harder
You have to mess around with logins
A centralised server means clients always need network access
Security restrictions may cause you trouble
Browser incompatibilities can cause a lot of extra work
UI conventions are less well-defined on the web - users may find it harder to use
Client-side storage is limited
The question is.. do enough of those apply to your project to make web the wrong choice?
One thing that has not been mentioned here is the level of complexity and knowledge required to build a good web application. The problem is that unless you are doing something very simple, there is no "single" knowledge area or technology that goes into these applications.
For example, if you were to write an application for some client-server platform, you might develop in Java or C++. For a complex web application you may need expertise in Java, JavaScript, HTML, Flash, CSS, Ajax, SQL, J2EE, etc. The components of a web-based application are also more numerous: a web application server, HTTP server, database and browser are typical components, but there could be more; a client-server app is typically just what it says, a client application and a server application. My experience and personal preference is not web-based; web-based is great for many things. But even though I am an IT architect for a leading company that is completely immersed in web apps as the solution for everything, the cons are still many. I do think the technology will evolve and the cons will go away over time, though.
Essentially the real limitations come only from the platform, namely the browser. If you have to account for all browsers in current use, that can be a pain due to the varying degrees of standards support in each of them.
If you have control over which browser is used - that is, everyone is on computers that you control on site, and, say, you install Firefox on all of them - you can leverage the latest JavaScript and CSS standards to their fullest in your content delivery.
[edit] You could also look into options like the Adobe Integrated Runtime, or "AIR", which lets you code the front-end with traditional browser-based options like XHTML/CSS/JavaScript or Flash/Flex while having the backend hooked up to your database online, giving you the functionality of a traditional desktop app at the same time.
The biggest difference and drawback I see with web applications is state management. Since the web is, by nature, stateless, everything you want to maintain has to be sent back and forth from the server with every request and response. Storing and retrieving that efficiently, with respect to page size and performance, is hard to do at times. Also, the fact that there is no real standard (at least not one that everyone adheres to) across browsers makes consistency really... fun.
You need network access to the server that is going to host the web application (if there are going to be multiple users for the application, which is typically the case).
Actually, there are more pros than cons - if you can give some details about your application, we could help a little more...
It completely depends on the requirements of your project. For the most part, there isn't much web applications cannot do these days. Admittedly, certain applications do belong on the desktop, as browsers (while advancing, and rapidly) are not quite there yet. But as the advent of applications such as Google Docs and Gmail shows, the gap is closing.
There isn't much you -cannot- do on the web. If you're creating a World of Warcraft competitor, however, the web is most certainly not the optimal solution. Again, unfortunately, we'd need more insight into the application you're building for the camp. The best part about the web is that anyone with a browser can use your application.
Web applications delegate processing to a remote machine. Depending on the amount of processing, this can be a con. Consider a photo editor that's a web app.
Web applications also can't deal with a whole lot of data going back and forth between the client and the server. You can watch video online... when it's compressed. It will be a while before we see any web-based video editing software.
Browser compatibility is also a hassle. You can't control the look-and-feel of the application 100%.
Vaibhav has a good point. What's your application?
A major one is downtime for migrations: users will not expect the application to be down, ever, but realistically it will have to be down for major upgrades. When doing this with a desktop application, the user (or the end-user's systems admin) is in control of when upgrades happen; with an online app, they're not.
For applications which have large data, performance can be a major problem as you're storing a large number of users' data centrally, which means the IO performance will not be as good as it would be if you gave them all a laptop.
In general, scalability poses problems for a server-based app; desktop applications scale really well.
You can do an awful lot with a web-based app, but it is a lot easier to do certain things with a thick client:
Performance: You get simple access to the full power of the client's CPU.
Responsiveness: Interactivity is fast and easy.
Graphics: You can easily use graphics libraries such as DirectX and OpenGL to create fast impressive graphics.
Work with local files
Peer-to-peer
Deciding whether a web application is a good approach depends on what you are trying to achieve. However here are some more general cons of web applications:
Real integration with desktop apps (e.g. Outlook) is impossible
Drag and drop between your app and the desktop / other running apps
With a web application there are more privacy concerns, since you are storing user data on your servers. You have to make sure that you don't lose or disclose it, and your users have to be comfortable with the idea of storing that data on your servers.
Apart from that, there are many security problems, like Man-in-the-middle attacks, XSS or SQL injections.
You also need to make sure that you have enough computing power and bandwidth at hand.
"Ex. The back button can cause you some trouble."
You'll have to be specific on this. A lot of people make fundamental mistakes in their web applications and introduce bugs in how they handle transactions. If you do not use "Redirect after Post" (also known as Post-Redirect-Get, PRG design), then you've created a bug which appears as a problem with the back button.
A blanket statement that the back button causes trouble is unlikely to be true. A specific example would clarify your specific question on this.
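For reference, a minimal Post-Redirect-Get sketch in TypeScript with Express; saveOrder and the routes are hypothetical:

```typescript
// The POST never renders a page, so Back/Refresh re-issues a harmless GET
// instead of resubmitting the form.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical stand-in for real persistence.
function saveOrder(_body: unknown): number {
  return 42; // pretend we stored the order and got an id back
}

app.post("/orders", (req, res) => {
  const id = saveOrder(req.body);
  res.redirect(303, `/orders/${id}`); // 303 forces the follow-up to be a GET
});

app.get("/orders/:id", (req, res) => {
  res.send(`Order ${req.params.id} received.`);
});

app.listen(3000);
```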
The back button really is not that much of an issue if you design your application correctly. You can use AJAX to manipulate parts of the current page without adding items to the browser history (since the page itself won't change).
The biggest issue with designing web applications has to do with state, and the challenges that need to be programmed around. With a desktop application, state is easy to handle: you can leave a database connection open, lock the record, and wait for the user to make the changes and commit. With a web application you could lock the record... but then what if the user closes the browser? These things must be overcome in the design of your application.
When designing a web application, make sure that each trip to the server "stands alone" and provides a complete answer. Always re-initialize your variables before performing any work, and never assume anything. One of the challenges I ran into once was pulling "pages" of grid data back to the user. In a really busy system, with record additions/modifications happening in real time, the user's navigation from page to page would vary greatly, sometimes even resulting in viewing the same set of a few records as new additions were added in front of the query.
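One common way around that shifting-pages problem, sketched here under my own assumptions rather than as what that project actually did, is keyset pagination: anchor each page to the last record the user saw instead of a numeric offset.

```typescript
// Pages are anchored to the last seen id, so rows inserted ahead of the
// query don't shift what the user is currently viewing.
interface Row {
  id: number;
  name: string;
}

function nextPage(rows: Row[], afterId: number, size = 2): Row[] {
  return rows
    .filter((r) => r.id > afterId)
    .sort((a, b) => a.id - b.id)
    .slice(0, size);
}

const data: Row[] = [
  { id: 1, name: "a" },
  { id: 2, name: "b" },
  { id: 3, name: "c" },
  { id: 4, name: "d" },
];

const first = nextPage(data, 0);                            // rows 1 and 2
const second = nextPage(data, first[first.length - 1].id);  // rows 3 and 4
console.log(first, second);
```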
