How to build a website supporting multiple languages (multilingual)

I am curious how the website www.student.com supports multiple languages.
Do they use a Google API or some other method?
What's the common way to build a website/app that supports multiple languages?

Every modern browser sends an "Accept-Language" header listing its preferred languages.
On the server side this header can be read, and the user can be dynamically redirected to a page in the correct language.
Request header:
Accept-Language: de-de
Server redirects to:
https://www.student.com/de-de
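For illustration, here is a minimal sketch of that redirect in Node.js with Express (the list of supported languages and the /de-de style routes are assumptions for the example, not anything student.com has published):

    // Sketch: redirect "/" to a language-prefixed URL based on Accept-Language.
    const express = require('express');
    const app = express();

    const SUPPORTED = ['de-de', 'en-us']; // languages this site actually serves

    app.get('/', (req, res) => {
      // req.acceptsLanguages() matches the Accept-Language header against the
      // supported list and returns the best match, or false if none fits.
      const lang = req.acceptsLanguages(...SUPPORTED) || 'en-us';
      res.redirect(302, '/' + lang);
    });

    // Placeholder for the per-language pages.
    app.get('/:lang', (req, res) => res.send('Content in ' + req.params.lang));

    app.listen(3000);

As for the Google API question: sites like this typically serve hand-maintained translations from locale files through an i18n library rather than machine-translating on the fly, though I can't say what student.com does internally.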

How do I get 'api.example.com'

I am using NextJS and my understanding is that both the front end and the back end exist in the same location. In development, for example, http://localhost:3000/about serves the About page to any user who wants to visit it. However, this means that any API routes I have in 'pages/api' are visible whenever I just append that path to my URL, displaying raw JSON.
How is it that some sites are able to keep the same domain but serve their API from api.website.com, while all their other content is on website.com? That way any queries to the API and server go through api.website.com instead of revealing anything on the main link.
It's because most websites host their API on a separate backend server built with libraries like Express. pages/api is just a Next.js utility whose routes come under localhost:3000/api/{get-user} (or your deployment URI's /api/ path); it's mostly used for development, testing, or production when there is no dedicated backend server.
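As a sketch of that setup: api.website.com is just a DNS record pointing at a separate backend process. All names and ports below are made up for the example:

    // api-server.js — a standalone Express backend deployed separately from
    // the Next.js frontend; DNS for api.website.com points at this process.
    const express = require('express');
    const app = express();

    // Allow the frontend origin to call this API from the browser (a crude
    // CORS sketch; a real app would use the cors package with tighter rules).
    app.use((req, res, next) => {
      res.set('Access-Control-Allow-Origin', 'https://website.com');
      next();
    });

    app.get('/get-user', (req, res) => {
      res.json({ id: 1, name: 'Ada' }); // placeholder payload
    });

    // A reverse proxy (nginx etc.) maps api.website.com to this port.
    app.listen(4000);

The frontend then fetches https://api.website.com/get-user instead of exposing a route under its own /api path.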

How to use NodeJS to combat social sharing and search engine issues when using single-page frameworks like AngularJS

I read an article about social sharing issues in AngularJS and how to combat them by using Apache as a proxy.
The solution is usable for small websites, but if a web app has 20+ different pages, I would have to rewrite URLs and create static files for all of them. Moreover, using PHP and Apache adds another stack to the app.
Can we use NodeJS as the proxy and rewrite the URLs, and what's the approach?
Is there a way to minimize static file creation?
Is there a way to do away with the proxy, URL rewriting, and static files altogether? For example, inside our NodeJS app we could check the user agent; if it is the Facebook bot, Twitter, or the like, we use the request module to download our page and return the raw HTML to them. Is that a plausible solution?
Normally, when someone shares a URL on a social network, that social network requests the page to generate a preview/thumbnail (aka "scraping").
Most likely those scrapers won't run JavaScript, so they need a static HTML version of the page.
The same applies to search engines (even though Google and others are starting to support JavaScript sites).
Here's a good approach for an SPA to still support scrapers:
use history.pushState in Angular to get virtual URLs when navigating through your app (i.e. URLs without a #)
server-side (Node.js or anything else), detect whether a request comes from a real user or a bot (e.g. check the User-Agent using this lib: https://www.npmjs.com/package/is-bot)
if the request URL has a file extension, it's probably a static resource request (images, .css, .js); proxy it to get the static file
if the request URL is a page and comes from a real user, always serve your index.html, which loads your Angular app (pro tip: keep this file cached in memory)
if the request URL is a page and comes from a bot, serve a pre-rendered version of the requested URL, since bots won't run JavaScript. This is the hard part (side note: ReactJS makes this problem much simpler). You can use a service like https://prerender.io/ ; they take care of loading your Angular app and saving each page as HTML (if you're curious, they use a headless/virtual in-memory browser called PhantomJS to do that, simulating what a real user would do clicking "Save As..."). You can then request and proxy those prerendered pages for bot requests (like social network scrapers). If you want, it's possible to run a prerender instance on your own servers.
All of this server-side process I described is implemented in this express.js middleware by prerender:
https://github.com/prerender/prerender-node/blob/master/index.js
(even if you don't like prerender, you can use that code as an implementation guide)
Alternatively, here's an implementation example using only nginx:
https://gist.github.com/thoop/8165802
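A stripped-down sketch of that branching in Express (the User-Agent regex stands in for the is-bot lib, and the prerender URL, domain, and paths are placeholders):

    // Sketch: static files, index.html for humans, prerendered HTML for bots.
    const express = require('express');
    const https = require('https');
    const path = require('path');

    const app = express();
    const BOT_RE = /googlebot|bingbot|facebookexternalhit|twitterbot/i;
    const PRERENDER = 'https://service.prerender.io'; // or your own instance

    // Requests with a file extension resolve to real files under ./public.
    app.use(express.static(path.join(__dirname, 'public')));

    app.get('*', (req, res) => {
      if (BOT_RE.test(req.headers['user-agent'] || '')) {
        // Bots: proxy the prerendered HTML (a real setup would also send
        // its X-Prerender-Token header).
        https.get(PRERENDER + '/https://example.com' + req.originalUrl,
          (pr) => pr.pipe(res));
      } else {
        // Humans: always serve index.html, which boots the Angular app.
        res.sendFile(path.join(__dirname, 'public', 'index.html'));
      }
    });

    app.listen(3000);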

Programmatically log in to a site with Apache basic authentication from a WPF .NET application

We have a requirement to open a web site URL in the default browser of the client machine from our WPF application. All this time we were using a simple Process.Start(URL) and it was working fine, until one of our customers put this "URL" behind basic authentication. The site they want us to browse to is hosted on an Apache web server. From what I know, if we use https://username:password@domain.com the browser takes us directly into the site without the username/password challenge; this works fine in Chrome and Firefox, but newer versions of IE stopped supporting it for security reasons.
Can someone suggest an alternate approach that achieves the same, considering the following objectives?
Browse to the URL, passing in the username and password.
Open the default browser on the client's desktop machine.
We don't want to use a browser control.
At a minimum it should work in 3 browsers: IE, Firefox and Chrome.
We cannot make changes to the client's desktop machine, so the solution shouldn't require any special settings.
The workaround for developers from the Microsoft Support site: http://support.microsoft.com/kb/834489
Workarounds for application and Web site developers
URLs that are opened by objects that call WinInet or Urlmon functions
For objects that use an HTTP or an HTTPS URL that includes user information when they call a WinInet or Urlmon function such as InternetOpenUrl, rewrite the object to use one of the following methods to send user information to the Web site:
Use the InternetSetOption function and include the following option flags: INTERNET_OPTION_USERNAME, INTERNET_OPTION_PASSWORD. Note: for these flags, InternetSetOption must have a handle returned by the InternetConnect function. Therefore, if the application uses the InternetOpenUrl function, modify the application to use the InternetConnect, HttpOpenRequest and HttpSendRequest WinInet functions. For more information about how to use these functions, visit the following Microsoft Web sites:
http://msdn2.microsoft.com/en-us/library/Aa384363
http://msdn2.microsoft.com/en-us/library/Aa384233
http://msdn2.microsoft.com/en-us/library/aa384247.aspx
Use the IAuthenticate interface. For more information about how to use the IAuthenticate interface, visit the following Microsoft Web site:
http://msdn2.microsoft.com/en-us/library/ms775080.aspx
Was this option tried?

HTML scraping with JS support

I am trying to scrape a company web page for automation purposes, but the scripts embedded in the page prevent me from fully replicating the request. The biggest pain is the script-generated cookies.
I thought of automating IE with WatiN, but I am not comfortable with that solution inside a service application.
What is your advice in this situation?
Thanks in advance.
screen-scraper is another tool (Java-based) that aims to be easy to use.
The basic idea is as Byron said: you will have to figure out what cookies are getting set (web proxy tools like Fiddler or Charles, or browser extensions like Firebug and Chrome's dev tools, will come in handy).
So you don't necessarily have to read or even execute the JavaScript on the page to imitate the same requests. Just use a proxy tool to see what cookies your browser sends to the server, and once you know what cookies the site expects to receive, set them manually in whatever script or tool you use for your scraping and you'll be golden.
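For instance, once the proxy capture shows which cookies the site expects, a bare-bones Node.js sketch (cookie names, values, and URL are made up) can replay them:

    // Sketch: replay the cookies observed in Fiddler/Charles by hand.
    const https = require('https');

    const options = {
      hostname: 'example.com',          // placeholder host
      path: '/protected/page',          // placeholder path
      headers: {
        // Copied verbatim from the proxy capture of a real browser session:
        'Cookie': 'session_id=abc123; js_check=passed',
        'User-Agent': 'Mozilla/5.0',    // some sites check this too
      },
    };

    https.get(options, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => console.log(body)); // the HTML you wanted
    });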
You have several options.
The easiest is to generate the cookies in your script. You will have to read the JavaScript code yourself, figure out what it is doing, and duplicate it. Fiddler is always your friend when scraping.
HtmlUnit is a Java web-browser library with JavaScript support. It has no GUI and is made for testing web applications.
Selenium will drive a browser much the same way Watir does, but it has rich API support for most major languages.
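To illustrate the Selenium route, here's a minimal selenium-webdriver sketch in Node.js (the target URL is a placeholder); because a real browser runs the page's JavaScript, the script-generated cookies simply appear:

    // Sketch: let a real browser execute the page's JS, then read the results.
    const { Builder } = require('selenium-webdriver');

    (async () => {
      const driver = await new Builder().forBrowser('firefox').build();
      try {
        await driver.get('https://example.com/page-to-scrape'); // placeholder
        const cookies = await driver.manage().getCookies(); // incl. JS-set ones
        console.log(cookies);
        console.log(await driver.getPageSource()); // fully rendered HTML
      } finally {
        await driver.quit();
      }
    })();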

Custom authorization support in Apache webserver

I would like to know if I can hook custom authorization support into the apache2 web server. What I want to do is, based on which user is logged in, disable a few HTML pages from being served to the UI. The logic that checks a given user's permissions on static resources (HTML, CSS, JS, etc.) is a little complex, so I can't use the authorization and access control support already available in apache2. I would want all requests for static resources in my webapp to first go through my custom (authorization) module, which then decides whether the request can be served or should be redirected to some error page.
What is the best way to achieve this?
This article talks about using mod_python to implement custom Apache/Subversion authentication/authorization, but it is quite generic.
Combined with the standard <Location> directive, this can be a solution to your problem.
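As a rough sketch of that combination (the mod_python directives are real, but the "myauthz" module and the paths are hypothetical; myauthz would implement your permission logic and return apache.OK or apache.HTTP_FORBIDDEN):

    # httpd.conf sketch: every request under /static first passes through a
    # custom mod_python access handler before Apache serves the file.
    <Location "/static">
        AuthType Basic
        AuthName "Restricted area"
        AuthUserFile /etc/apache2/htpasswd
        Require valid-user

        PythonPath "['/var/www/handlers'] + sys.path"
        PythonAccessHandler myauthz
    </Location>

    # Send denied users to an error page instead of a bare 403.
    ErrorDocument 403 /error.html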
