Server-side responsiveness good practices - CakePHP

I have the following scenario: I work with CakePHP and Twitter Bootstrap.
I'm using a lot of responsive techniques and writing lots of HTML that changes for each screen size.
I was thinking about detecting the screen size and saving it on the server side, so I can send only the HTML that will really be useful on that page. Since the user rarely changes the window size, it won't hurt to hit F5 when they do.
Is it a good practice? What do you suggest?

That job is clearly supposed to be done by the browser, and your task is to make that happen by using the right CSS for the right device. As somebody already mentioned in the comments, the device can rotate, and devices have different DPI. I'm pretty sure you can't pass the DPI to the server without explicitly reading it via JS (if that is possible at all) and sending it to the server in an AJAX call.
You can't rely on server-side device detection alone, nor is it good to render markup conditionally for that purpose; it just multiplies the maintenance you have to do for each device.
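If you really do need to react to the viewport from script, doing it client-side keeps rotation and window resizes working without a round trip to the server. A minimal sketch (the breakpoint and class name are made up for illustration):

    // Let CSS do the responsive work; JS only toggles a hook class.
    var mobileQuery = window.matchMedia('(max-width: 767px)');

    function applyLayout(query) {
      // query.matches is true when the viewport is below the breakpoint.
      document.body.classList.toggle('layout-mobile', query.matches);
    }

    applyLayout(mobileQuery);             // run once on load
    mobileQuery.addListener(applyLayout); // and again on resize/rotation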
I recommend you read a little more about responsive web design; there are plenty of articles and books about it these days.

Not really a good practice, mate. You'll increase your application load and waste unnecessary space just to save a few lines of code. Imagine if you were making a website like Facebook: how much server space would it take just to store that piece of information for millions of users? Responsive design is a must these days, and in CSS you should just give general values for ranges of resolutions. There's a good paid tutorial on Treehouse's website, I think, and others are freely available on YouTube, etc.

Related

ExtJS 3.* chart uses too much memory

I am using an Ext JS line chart with over 5,000 data points. It uses too much memory, especially on IE. How can I fix this leak, and what causes it?
Both displaying 5k+ data points and processing them on the client side are bad design decisions and should be avoided. Nobody can possibly comprehend this much data in one chart; it should be 10-12 points max or it becomes meaningless white noise. Client-side processing in JavaScript is expensive, especially in older IEs; not only that, but you're also wasting time and resources transferring data that is not going to be used.
The best solution is to modify your server side method to filter or aggregate data and provide UI access to these features.
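To make the aggregation idea concrete, here is a rough sketch of bucketed averaging in plain JavaScript. In practice you would do the equivalent in your server-side method so the raw points never reach the browser; the time/value field names and the store.loadData call are assumptions for illustration:

    // Reduce a large series to at most maxPoints by averaging fixed-size buckets.
    function downsample(points, maxPoints) {
      var bucketSize = Math.ceil(points.length / maxPoints);
      var result = [];
      for (var i = 0; i < points.length; i += bucketSize) {
        var bucket = points.slice(i, i + bucketSize);
        var sum = 0;
        for (var j = 0; j < bucket.length; j++) {
          sum += bucket[j].value;
        }
        result.push({
          time: bucket[0].time,      // keep the first x value of the bucket
          value: sum / bucket.length // average the y values
        });
      }
      return result;
    }

    // e.g. store.loadData(downsample(rawData, 500));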
Actually I have the same question. I agree it's not worth loading that much data, but there are cases where I need to show drill-down data in my ExtJS window,
an image of which is displayed below:
Here the points may go up to 15k. The user actually wants to see the variations here rather than the exact data, but he may still need mouse-over / zoom.
I achieved this with an HTML5 plugin added as an iframe.
Any advice on achieving this in ExtJS, or how to go about it?

Content Management System targeting Mobile Devices

If I want to implement a CMS for mobile devices, what kinds of points should I take into account?
For example, make page size smaller, use optimized (small) pictures. Any other ideas?
Also, what kinds of rules can be applied when converting web pages that were designed for desktop browsers into pages that are easily displayed in mobile browsers?
I know that mobile devices vary widely in their capabilities and properties, but I'm still trying to list out some rules.
Any other ideas, suggestions, questions and advice are welcome on this topic.
Thanks for your opinions and answers.
A short foreword: all the things I'm listing below are things the main product of the company I work for already does or has worked out a solution for; the whole goal of this answer is to give you pointers.
Identifying the phone
When dealing with mobile as a web context, it's absolutely imperative that you identify the phone correctly. That should be the highest priority. Here are a couple of issues with identifying phones and their features:
Do not use a userAgent.contains("iPhone") detection scheme. There are already loads of web bots and other applications whose user agent string contains iPhone, and you'd identify them incorrectly.
Not all phones even send User-Agent headers. However, some of those send UAProf URLs which contain all the phone's features in RDF format. Unfortunately this introduces the next two problems:
Obviously you won't have access to every single device's data out there, and you're bound to use public data repositories such as WURFL. These databases are, however, incomplete, slightly lagging behind, or missing data you'd like to have. They are your best bet for an initial data set, though.
UAProfs lie. Yes, they contain false information - lots of it! Partly this is because the manufacturers forget to update the XML files, and partly because the UAProf files are written during the development of the phone and, as we know, features do change during development.
When relying on a feature, make sure you're not relying on a specific version of a specific phone. For example, BlackBerry has a feature called Tile which is basically a really fancy bookmark, but you can't just serve it to all BlackBerry phones; you have to identify the operating system version of the actual phone to serve the right variation of the Tile. The same goes for touch screens: the iPhone wasn't the first one with a touch screen and most certainly isn't the only one either. Also, don't expect a situation where the device has only one form of input; for example, the Nokia N900 has a touch screen, a physical keyboard and even a stylus.
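To make the lookup idea concrete, here is a tiny sketch of resolving capabilities from a device database instead of substring-matching the User-Agent. The repository, its keys and its fields are all hypothetical stand-ins for a WURFL-backed lookup:

    // Illustrative only: a capability record per known device signature.
    var deviceRepository = {
      'blackberry9700/5.0': { os: 'BlackBerry OS 5', touch: false, tiles: false },
      'blackberry9800/6.0': { os: 'BlackBerry OS 6', touch: true,  tiles: true  }
    };

    var fallbackProfile = { os: 'unknown', touch: false, tiles: false };

    function resolveDevice(userAgent) {
      if (!userAgent) {
        return fallbackProfile; // some phones send no User-Agent at all
      }
      var signature = userAgent.toLowerCase().split(' ')[0];
      return deviceRepository[signature] || fallbackProfile;
    }

Serving a feature then becomes a question of "does this profile support tiles" rather than "is this a BlackBerry".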
Creating the actual pages
Thankfully this is something people have agreed upon and when creating the pages, you're supposed to use XHTML-MP. But oh how one would wish things were this easy...
All phones have differing level of XHTML-MP/CSS support. As an example, if I remember correctly, some older BlackBerries don't support background-color for block elements. Or header tags. We've also seen incorrect ordering of span elements when there's several in a row. Oh and for some reason tables are really hard. Basically, you have to go low on markup/styling tricks.
You can't test the existence of a feature by using the feature itself. If you want to detect JavaScript support, you could think that adding a bit of JavaScript to the page for that purpose alone would work, right? Nope, that crashes a significant percentage of the mobile phones visiting your site. Sure, new phones don't crash, but not everyone has bought their phone in the last 12 months. Also, mobile-specific JavaScript APIs differ per manufacturer; as yet another example, there are currently at least three different APIs for JavaScript-based geolocation detection, none of them interoperable with the others.
Add all these on top of normal CMS features (security, content management and transformation, caching, modularity, visitor tracking and whatnot) and you should have some sort of picture of how everything affects everything and how you really should consider the cost of making your own.
In fact even though this is sort of against the general spirit of SO, I'd strongly suggest for you to get a readily made solution such as ours and use that instead for your site building needs. After all, our product has seven years worth of specialized development under its hood.
A couple that we used ...
A cms targeted for mobile devices should be able to detect the device type and detect (or have a database of) screen resolutions so that content, particularly images, can be scaled appropriately.
The rendering engine should also be able to determine if the device can handle HTML or WAP and switch markup languages appropriately.
Paging capability on the output, as opposed to rendering very large pages (if content pages are large), is also helpful.
Clean integration with the corresponding web site CMS (so content doesn't need to be produced twice) is also helpful if there is, in fact, a corresponding full-size web site.
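To make the image-scaling point concrete, here is a small sketch of clamping image widths to a device profile before the CMS renders a page; the profile shape and the resizeImage helper are assumptions:

    // Never send an image wider than the device can display.
    function pickImageWidth(deviceProfile, originalWidth) {
      var maxWidth = deviceProfile.screenWidth || 176; // conservative fallback
      return Math.min(originalWidth, maxWidth);
    }

    // e.g. markup = '<img src="' + resizeImage(src, pickImageWidth(profile, 800)) + '">';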

Looking for an example of when screen scraping might be worthwhile

Screen scraping seems like a useful tool - you can go onto someone else's site and steal their data - how wonderful!
But I'm having a hard time with how useful this could be.
Most application data is pretty specific to that application even on the web. For example, let's say I scrape all of the questions and answers off of StackOverflow or all of the results off of Google (assuming this were possible) - I'm left with data that is not very useful unless I either have a competing question and answer site (in which case the stolen data will be immediately obvious) or a competing search engine (in which case, unless I have an algorithm of my own, my data is going to be stale pretty quickly).
So my question is, under what circumstances could the data from one app be useful to some external app? I'm looking for a practical example to illustrate the point.
It's useful when a site publicly provides data that is (still) not available as an XML service. I had a client who used scraping to pull flight tracking data into one of his company's intranet applications.
The technique is also used for research. I had a client who wanted to compare the contents of several online dictionaries by part of speech, and all of these sites had to be scraped.
It is not a technique for "stealing" data. All ordinary usage restrictions apply. Many sites implement CAPTCHA mechanisms to prevent scraping, and it is inappropriate to work around these.
A good example is StackOverflow - no need to scrape data as they've released it under a CC license. Already the community is crunching statistics and creating interesting graphs.
There's a whole bunch of popular mashup examples on ProgrammableWeb. You can even meet up with fellow mashupers (O_o) at events like BarCamps and Hack Days (take a sleeping bag). Have a look at the wealth of information available from Yahoo APIs (particularly Pipes) and see what developers are doing with it.
Don't steal and republish, build something even better with the data - new ways of understanding, searching or exploring it. Always cite your data sources and thank those who helped you. Use it to learn a new language or understand data or help promote the semantic web. Remember it's for fun not profit!
Hope that helps :)
If the site has data that would benefit from being accessible through an API (and it would be free and legal to do so), but they just haven't implemented one yet, screen scraping is a way of essentially creating that functionality for yourself.
Practical example -- screen scraping would allow you to create some sort of mashup that combines information from the entire SO family of sites, since there's currently no API.
Well, to collect data from a mainframe. That's one reason why some people use screen scraping. Mainframes are still in use in the financial world and often it's running software that has been written in the previous century. The people who wrote it might already be retired and since this software is very critical for these organizations, they really hate it when some new code needs to be added. So, screenscraping offers an easy interface to communicate with the mainframe to collect information from the mainframe and then send it onwards to any process that needs this information.
Rewrite the mainframe application, you say? Well, software on mainframes can be very old. I've seen software on mainframes that was over 30 years old, written in COBOL. Often, those applications work just fine and companies don't want to risk rewriting parts because it might break some code that had been working for over 30 years! Don't fix things if they're not broken, please. Of course, additional code could be written but it takes a long time for mainframe code to be used in a production environment. And experienced mainframe developers are hard to find.
I myself had to use screen scraping in a software project too. This was a scheduling application which had to capture the console output of every child process it started. It's the simplest form of screen scraping, actually, and many people don't even realize that if you redirect the output of one application to the input of another, it's still a kind of screen scraping. :)
Basically, screen scraping allows you to connect one (web) application with another one. It's often a quick solution, used when other solutions would cost too much time. Everyone hates it, but the amount of time it saves still makes it very efficient.
Let's say you wanted to get scores from a popular sports site that did not make the information available via an XML feed or API.
For one project we found a (cheap) commercial vendor that offered translation services for a specific file format. The vendor didn't offer an API (it was, after all, a cheap vendor) and instead had a web form to upload and download from.
With hundreds of files a day, the only way to do this was to use WWW::Mechanize in Perl, screen scrape its way through the login and upload boxes, submit the file, and save the returned file. It's ugly and definitely fragile (if the vendor changes the site in the least it could break the app), but it works. It's been working now for over a year.
One example from my experience.
I needed a list of major cities throughout the world with their latitude and longitude for an iPhone app I was building. The app would use that data along with the geolocation feature on the iPhone to show which major city each user of the app was closest to (so as not to show exact location), and plot them on a 3D globe of the earth.
I couldn't find an appropriate list in XML/Excel/CSV type format anywhere easily, but I did find this wikipedia page with (roughly) the info I needed. So I wrote up a quick script to scrape that page and load the data into a database.
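For what it's worth, the same kind of quick one-off can be sketched in JavaScript (Node 18+ with built-in fetch); the URL and the row pattern here are illustrative, not the page's real markup:

    // Pull city name, latitude and longitude out of a table on a page.
    const url = 'https://en.wikipedia.org/wiki/...'; // the actual list page

    async function scrapeCities() {
      const html = await (await fetch(url)).text();
      // Hypothetical pattern: three adjacent cells with name, lat, lon.
      const row = /<td>([^<]+)<\/td>\s*<td>(-?\d+\.\d+)<\/td>\s*<td>(-?\d+\.\d+)<\/td>/g;
      const cities = [];
      let match;
      while ((match = row.exec(html)) !== null) {
        cities.push({ name: match[1], lat: Number(match[2]), lon: Number(match[3]) });
      }
      return cities; // load these into the database from here
    }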
Any time you need a computer to read the data on a website. Screen scraping is useful in exactly the same instances that any website API is useful. Some websites, however, don't have the resources to create an API themselves; screen scraping is the developer's way around that.
For instance, in the earlier days of Stack Overflow, someone built a tool to track changes to your reputation over time, before Stack Overflow itself provided that feature. The only way to do that, since Stack Overflow has no API, was to screen scrape.
The obvious case is when a webservice doesn't offer reverse search. You can implement that reverse search over the same data set, but it requires scraping the entire dataset.
This may be fair use if the reverse search also requires significant pre-processing, e.g. because you need to support partial matching. The data source may not have the technical skills or computing resources to provide the reverse search option.
I use screen scraping daily. I run some eCommerce sites and have screen-scraping scripts running daily to gather product lists automatically from my suppliers' wholesale sites. This allows me to have up-to-date information on all the products available to me from several suppliers and lets me flag non-economical margins due to price changes.

What are the cons of a web based application

I am going to write a database application for the camp I work for. I am thinking about writing it in C# with a Windows GUI interface, but using a browser as the application is seeming more and more appealing for various reasons. What I am wondering is why someone would not choose to write an application as a web application. E.g. the back button can cause you some trouble. Are there other things that anyone can think of?
There are plenty of cons:
Speed and responsiveness tend to be significantly worse
Complicated UI widgets (such as tree controls) are harder to do
Rendering graphics of any kind is pretty tricky, 3D graphics is even harder
You have to mess around with logins
A centralised server means clients always need network access
Security restrictions may cause you trouble
Browser incompatibilities can cause a lot of extra work
UI conventions are less well-defined on the web - users may find it harder to use
Client-side storage is limited
The question is.. do enough of those apply to your project to make web the wrong choice?
One thing that has not been mentioned here is the level of complexity and knowledge required to build a good web application. The problem is that unless you are doing something very simple, there is no single body of knowledge or technology that goes into these applications.
For example, if you were to write an application for some client-server platform, you might develop in Java or C++. For a complex web application you may need expertise in Java, JavaScript, HTML, Flash, CSS, Ajax, SQL, J2EE, etc. The components of a web-based application are also more numerous: a web application server, HTTP server, database and browser are typical components, but there could be more; a client-server app is typically just what it says, a client application and a server application. My experience and personal preference is not web based; web based is great for many things. But even though I am an IT architect for a leading company that is completely immersed in web apps as the solution for everything, the cons are still many. I do think the technology will evolve and the cons will go away over time, though.
Essentially the real limitations come only from the platform, namely the browser. If you have to account for all browsers in current use, that can be a pain due to the varying degrees of standards support in each of them.
If you have control over which browser is used - that is, everyone is on computers that you control on site and, say, you install Firefox on all of them - you can then leverage the latest JavaScript and CSS standards to their fullest in your content delivery.
[edit] You could also look into options like the Adobe Integrated Runtime, or "AIR", which lets you code the front end with traditional browser-based options like XHTML/CSS/JavaScript or Flash/Flex and have the back end hooked up to your database online, while also providing the functionality of a traditional desktop app at the same time.
The biggest difference and drawback I see with web applications is state management. Since the web is, by nature, stateless, everything you want to maintain has to be sent back and forth from the server with every request and response. How to store and retrieve it efficiently, with respect to page size and performance, is hard to do at times. Also, the fact that there is no real standard (at least not one that everyone adheres to) for browsers makes consistency really... fun.
You need network access to the server that will host the web application (if there are going to be multiple users for the application - which is typically the case).
Actually, there are more pros than cons - if you can give some details about your application, we could help a little more...
It completely depends on the requirements of your project. For the most part, there isn't much web applications cannot do these days. Admittedly, certain applications do belong on the desktop, as browsers (while advancing, and rapidly) are still not quite there yet. But the advent of applications such as Google Docs and Gmail shows how much is already possible.
There isn't much you cannot do on the web. If you're creating a World of Warcraft competitor, however, the web is most certainly not the optimal solution. Again, unfortunately we'd need more insight into the application you're building for the camp. The best part about the web is that anyone with a browser can use your application.
Web applications delegate processing to a remote machine. Depending on the amount of processing, this can be a con. Consider a photo editor that's a web app.
Web applications also can't deal with a whole lot of data going back and forth between the server and the client. You can watch video online... when it's compressed. It will be a while before we see any web-based video editing software.
Browser compatibility is also a hassle. You can't control the look-and-feel of the application 100%.
Vaibhav has a good point. What's your application?
A major one is down time for migrations... users will not expect the application to be down, ever, but realistically it will have to be down for major upgrades. When doing this with a desktop application, the user (or end-user systems admin) is in control of when upgrades happen; with an online app, they're not.
For applications which have large data, performance can be a major problem as you're storing a large number of users' data centrally, which means the IO performance will not be as good as it would be if you gave them all a laptop.
In general, scalability is a problem for a server-based app; desktop applications scale really well.
You can do an awful lot with a web-based app, but it is a lot easier to do certain things with a thick client:
Performance: You get simple access to the full power of the client's CPU.
Responsiveness: Interactivity is fast and easy.
Graphics: You can easily use graphics libraries such as DirectX and OpenGL to create fast impressive graphics.
Work with local files
Peer-to-peer
Deciding whether a web application is a good approach depends on what you are trying to achieve. However here are some more general cons of web applications:
Real integration with desktop apps (e.g. Outlook) is impossible
Drag and drop between your app and the desktop / other running apps
With a web application there are more privacy concerns, since you are storing user data on your servers. You have to make sure that you don't lose or disclose it, and your users have to be comfortable with the idea of storing that data on your servers.
Apart from that, there are many security problems, like Man-in-the-middle attacks, XSS or SQL injections.
You also need to make sure that you have enough computing power and bandwidth at hand.
"Ex. The back button can cause you some trouble."
You'll have to be specific on this. A lot of people make fundamental mistakes in their web applications and introduce bugs in how they handle transactions. If you do not use "Redirect after Post" (also known as Post-Redirect-Get, PRG design), then you've created a bug which appears as a problem with the back button.
A blanket statement that the back button causes trouble is unlikely to be true. A specific example would clarify your specific question on this.
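For reference, here is a minimal sketch of the Redirect-after-Post pattern (Express-style here purely for illustration; the routes and in-memory storage are made up). The point is that the response to the POST is a redirect, so Back and Refresh only ever replay a harmless GET:

    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    const comments = []; // in-memory stand-in for real persistence

    app.post('/comments', (req, res) => {
      comments.push(req.body.text); // do the write...
      res.redirect('/comments');    // ...then answer the POST with a redirect
    });

    app.get('/comments', (req, res) => {
      res.send('<ul>' + comments.map(c => '<li>' + c + '</li>').join('') + '</ul>');
    });

    app.listen(3000);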
The back button really is not that much of an issue if you design your application correctly. You can use AJAX to manipulate parts of the current page without adding items to the browser history (since the page itself won't change).
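A minimal sketch of that kind of partial update (the endpoint and element id are made up): the fragment is swapped in place, so nothing is added to the history.

    function refreshOrderList() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/orders/fragment', true);
      xhr.onload = function () {
        // Replace only the list, not the whole page; no history entry is created.
        document.getElementById('order-list').innerHTML = xhr.responseText;
      };
      xhr.send();
    }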
The biggest issue with designing web applications has to do with state, and the challenges that need to be programmed around. With a desktop application, state is easy to handle, you can leave a database connection opened, lock the record and wait for the user to make the changes and commit. With a web application, you could lock the record...but then what if the user closes the browser? These things must be overcome in the design of your application.
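One common way around the "can't hold a lock across requests" problem is optimistic concurrency, sketched below with a hypothetical version column and a made-up db.query helper; the update only succeeds if nobody saved the record after this user loaded the form:

    async function saveRecord(id, changes, versionSeenByUser) {
      const result = await db.query(
        'UPDATE records SET data = ?, version = version + 1 WHERE id = ? AND version = ?',
        [JSON.stringify(changes), id, versionSeenByUser]
      );
      if (result.affectedRows === 0) {
        // Someone else changed the record while the form was open.
        throw new Error('Record was modified by another user; please reload.');
      }
    }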
When designing a web application, make sure that each trip to the server "stands alone" and provides a complete answer. Always re-initialize your variables before performing any work and never assume anything. One of the challenges I ran into once was pulling "pages" of grid data back to the user. In a really busy system, with record additions/modifications happening in real time, the user's navigation from page to page could vary greatly, sometimes even resulting in viewing the same set of a few records as new additions were added in front of the query.

What's the best way to make a mobile friendly site?

Speaking entirely in technology-free terms, what is the best way to make a mobile friendly site? That is, I want to make a site that will work on a regular computer but also have mobile versions of the pages. Should I rewrite each page? The pages will probably have different functionality, so should I rewrite the backend code? Should it be an effectively different site with the same database?
On my site, I detect user agent, and for known mobile browsers I serve a different stylesheet, with some larger/less necessary items left off some pages. The backend doesn't really change.
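A rough sketch of that approach (Express-style middleware; the regex and flag name are made up): known mobile browsers get a flag the templates can use to pick the lighter stylesheet and drop the heavier page elements, while the backend logic stays the same.

    const express = require('express');
    const app = express();

    const MOBILE_UA = /iphone|ipod|android|blackberry|opera mini|windows phone/i;

    app.use((req, res, next) => {
      const ua = req.headers['user-agent'] || '';
      res.locals.isMobile = MOBILE_UA.test(ua); // templates pick mobile.css when true
      next();
    });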
I added a mobile presentation layer to an operational site about a year ago. Based on the architecture of the site (hopefully this isn't too technology dependent for you) I added a new set of JSPs to accommodate mobile browsers (sidenote: see http://wurfl.sourceforge.net/ for a great way to build mobile pages independent of browser type). Additionally some of the back-end functionality was changed due to the limited functionality of most mobile browsers. So, in short, the integration wasn't as painful as one would expect.
Good luck!
This is a pretty broad question, but here goes:
If the site is primarily about the content, meaning it's not so much a service you use as it's a publication you read, then I'd try to avoid publishing two sites wherever possible. Concentrate on simple presentation using mature technologies that mobile browsers can handle fairly well.
If it's essentially a software application delivered via the network, then things get trickier, because you're going to want to consider the UI of the mobile device, and how it differs from the desktop.
This should go without saying, but either way, if you have many mobile users, you should keep that in mind when you author content for the site. Formats, length, voice, etc.
In addition to the WURFL / WALL capabilities system that todd mentioned, there are Java Server Faces libraries available that use alternate WML renderkits for mobile phones.
One way I have done it in the past was to make sure my data was abstracted well in the data tier and then use separate middle-tier models to pull what was appropriate. In my case the application was a weather application, and the display capabilities of the target devices were really limited, so we opted to show the user only the essentials on the mobile devices while the website was full featured. That was probably 10 years ago when WAP was big. But these days, with devices getting bigger screens and better bandwidth, you may want to consume and display the exact same data with a different view model.
I never really know what type of application will need to consume the data in the future. We do a lot of apps across platforms but the domain model rarely changes. So I end up using the same middle tier objects where I can and pulling that data in different clients. A good example of this is a recent project where we had a rich internet application (widget), a full website, and a web service consuming the same data. Data abstraction in the middle-tier really shines in this environment.
On a very high level of abstraction, there are two main caveats with mobile devices: (1) their screen is small, (2) their network connection is intermittent. This basically means that you need to present the content so that it looks fine even on a small (variable-size) screen, and preferably make it cacheable too so that your users can browse the content while offline. Then there's also the problem of low bandwidth and high latency, but those are slightly less important nowadays.
This is a very thorough overview of how to make a site mobile, though I hope it's fair to say that there will always be different requirements for anyone seeking to go mobile. If you have a blog, then you could just as easily make it mobile friendly using Mippin Mobilizer; it's free, provides branding customisation tools, and with a big audience already browsing a wide mix of mobilized content, there are opportunities to generate advertising revenue around your blog.
This is because the Mippin Mobilized blog then becomes part of a much wider community of content, people, news, blogs, listings, all connecting around content, and much more at the mobile site:
http://mippin.com (on a mobile browser.)
Take a look at the Mobilizing tool because it shows off what the site can do in a second:
www.mippin.com/mobilizer
Only if you have a blog of course...
