I want to show recently viewed nodes in Drupal 7. Currently it shows them for the individual user only; I need it to show the nodes recently visited by all users of the site. I used Recently Read and Session API, but they show a particular user's recently viewed nodes. I want to display all nodes visited by all users (authenticated and anonymous).
Give this a try: https://drupal.org/project/radioactivity
(introduction tutorial here: http://www.wunderkraut.com/blog/radioactivity-2-basics/2011-12-05)
It's pretty flexible and should suit your use case.
I have two existing projects that are independent of each other, and I was tasked with bringing them together under a suite of some sort and sharing some small data between them. The problem is that one platform uses ReactJs and the other VueJs.
Example:
To give you a small example, it's like having Facebook and Instagram built with different JS frameworks and bringing them together under one platform, let's call it Meta, so the user goes to one place and finds all the products that the company offers.
(Examples of the data that needs to be shared: dark/light mode, whether the user is logged in, ....)
Also, is it possible to wrap both apps inside another one, like having a whole new project consisting of a navbar and a body, where the body switches between showing the first and the second platform using the navbar links?
Please, if you have any suggestions, throw them at me.
I want to build a webcrawler that goes randomly around the internet and puts broken (HTTP status code 4xx) image links into a database.
So far I have successfully built a scraper using the Node packages request and cheerio. I understand the limitation is websites that create content dynamically, so I'm thinking of switching to Puppeteer. Making this as fast as possible would be nice, but it is not necessary, as the server should run indefinitely.
My biggest question: Where do I start to crawl?
I want the crawler to recursively find random webpages that likely have content and might have broken links. Can someone suggest a smart approach to this problem?
List of Domains
In general, the following services provide lists of domain names:
Alexa Top 1 Million: top-1m.csv.zip (free)
CSV file containing 1 million rows with the most visited websites according to Alexa's algorithms (see the loading sketch after this list)
Verisign: Top-Level Domain Zone File Information (free IIRC)
You can ask Verisign directly via the linked page to give you their list of .com and .net domains. You have to fill out a form to request the data. If I recall correctly, the list is given free of charge for research purposes (maybe also for other reasons), but it might take several weeks until you get the approval.
whoisxmlapi.com: All Registered Domains (requires payment)
The company sells all kinds of lists containing information about domain names, registrars, IPs, etc.
premiumdrops.com: Domain Zone lists (requires payment)
Similar to the previous one, you can get lists of domains for different TLDs.
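Whichever list you pick, loading it into a crawl queue is straightforward. Here is a minimal sketch in Node.js, assuming the Alexa list has been downloaded and unzipped to top-1m.csv (one "rank,domain" pair per line):

    const fs = require('fs');

    // Read the Alexa CSV and turn each "rank,domain" row into a seed URL.
    function loadSeedUrls(path) {
      return fs.readFileSync(path, 'utf8')
        .split('\n')
        .map(line => line.trim().split(',')[1]) // keep the domain, drop the rank
        .filter(domain => domain && domain.length > 0)
        .map(domain => 'http://' + domain);
    }

    const seedUrls = loadSeedUrls('top-1m.csv');
    console.log('Loaded ' + seedUrls.length + ' seed URLs, e.g. ' + seedUrls[0]);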
Crawling Approach
In general, I would assume that the older a website is, the more likely it is to contain broken images (though that is already a bold assumption in itself). So you could try to crawl older websites first, if you use a list that contains the date each domain was registered. In addition, you can speed up the crawling process by using multiple instances of Puppeteer.
To give you a rough idea of the crawling speed: if your server can crawl 5 websites per second (which requires 10-20 parallel browser instances, assuming 2-4 seconds per page), you would need roughly two days for 1 million pages (1,000,000 / 5 / 60 / 60 / 24 ≈ 2.3).
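To make the per-page check concrete, here is a minimal sketch of what one such Puppeteer instance could do. Puppeteer's response events expose the status code of every image request; storing the result in your database is left as a placeholder:

    const puppeteer = require('puppeteer');

    // Visit one page and collect all image responses with a 4xx status code.
    async function findBrokenImages(page, url) {
      const broken = [];
      const onResponse = response => {
        if (response.request().resourceType() === 'image' &&
            response.status() >= 400 && response.status() < 500) {
          broken.push({ page: url, image: response.url(), status: response.status() });
        }
      };
      page.on('response', onResponse);
      try {
        await page.goto(url, { waitUntil: 'networkidle2', timeout: 30000 });
      } catch (err) {
        // Ignore pages that fail to load or time out.
      }
      page.removeListener('response', onResponse);
      return broken;
    }

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const results = await findBrokenImages(page, 'http://example.com');
      console.log(results); // replace with your database insert
      await browser.close();
    })();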
I don't know if that's what you're looking for, but this website renders a new random website whenever you click the New Random Website button. It might be useful if you scrape it with Puppeteer.
I recently had this question myself and was able to solve it with the help of this post. To clarify what other people have said previously: you can get lists of websites from various sources. Thomas Dondorf's suggestion to use Verisign's TLD zone file information is now outdated, as I learned when I tried contacting them. Instead, you should look at ICANN's CZDS. It gives you access to zone file information (on request) for any TLD, not just .com and .net, allowing you to potentially crawl more websites. In terms of crawling, as you said, Puppeteer would be a great choice.
I am trying to create a database that can be used in my office. What I am trying to do is create a form where a user can input a link such as "www.stackoverflow.com" and it pulls up information about that link. For example, when we are reviewing documents, if we saw the link "www.stackoverflow.com", we would have to create alternate text that would instead read "Stack Overflow Website" when read by a screen reader.
We deal with so many links across so many documents that it can be hard for my team to be consistent document to document, person to person. Having this database would allow someone to enter the URL, and it would pull up the alt text we have for it in multiple languages. I am learning and trying to figure out databases, but I am not very knowledgeable about how to put it all together within Access.
I would appreciate any kind of help or pushes in the right direction.
Has anyone seen an Android app created with App Inventor that is a catalog?
I want to create an app with a static DB: when a user selects a number, it displays the item name, details about the item, and an image of that item.
I suggest this easy way if you're a beginner at programming in general and you know a bit of HTML and JavaScript.
Have your database in JSON; most database management systems will export your tables in JSON format.
And here is how to insert your JavaScript into App Inventor:
http://puravidaapps.com/javascript.php
Using a JavaScript library will make the process even easier.
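For instance, a minimal sketch of the JSON lookup (the catalog data and field names here are made up for illustration):

    // A small catalog exported as JSON; "name", "details" and "image" are hypothetical fields.
    const catalog = [
      { name: "Coffee mug", details: "Ceramic, 300 ml", image: "mug.jpg" },
      { name: "T-shirt", details: "100% cotton", image: "shirt.jpg" }
    ];

    // Look up an item by the (1-based) number the user selected.
    function getItem(number) {
      return catalog[number - 1];
    }

    const item = getItem(2);
    console.log(item.name + ' - ' + item.details + ' - ' + item.image);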
Let me suggest storing your data in a CSV file. Just add it as an asset to your project and import it using the File component. Your catalog could be displayed using a ListPicker component.
Start by doing the tutorials to learn the basics of App Inventor. Also take a look at How to work with Lists by Saj
How to work with Lists and Lists of lists (pdf) by appinventor.org
My suggestion would be to use a list stored in an online database like Firebase or a custom TinyWebDB. That way you can change the items, their prices, and their descriptions without having to rebuild the app every time. As a bonus, you could use that database to track purchases. If by static DB you mean a database on the device, like TinyDB, I would still recommend using lists because they are so easy to work with.
You could have three paired lists: one for images, one for descriptions, and one for names. Then, when a user picks an item number, say 3, you display item 3 of names, item 3 of descriptions, and item 3 of images.
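Expressed in JavaScript (App Inventor itself uses blocks, and its lists are 1-based), the paired-lists idea looks like this, with illustrative data:

    // Three paired lists: index n in each list describes the same item.
    const names = ['Coffee mug', 'T-shirt', 'Poster'];
    const descriptions = ['Ceramic, 300 ml', '100% cotton', 'A2, matte'];
    const images = ['mug.jpg', 'shirt.jpg', 'poster.jpg'];

    // The user picks item number 3 (subtract 1 for 0-based arrays).
    const picked = 3;
    console.log(names[picked - 1], descriptions[picked - 1], images[picked - 1]);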
I work for a business that has offices in four regions: the UK, the US, Dubai, and Singapore. Each region offers slightly different products. I'm planning on using a mega-menu dropdown in the primary navigation.
My question is: is it possible to display different menu items depending on which region the user is in? I would have thought it would be possible using GeoIP, but I'm not sure. I don't want to build four separate websites, as the content is virtually the same.
Has anyone come across this issue before?
Thanks
Mark
It is possible to do this with IP Geolocation using a product such as MaxMind. Depending on your particular needs, you could use a web service, a JavaScript service, or a downloadable database:
http://dev.maxmind.com/geoip/
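As a rough sketch of the JavaScript-service route, assuming MaxMind's GeoIP2 JavaScript client is loaded on the page (the country-to-menu mapping and element IDs below are hypothetical):

    // Map ISO country codes to hypothetical menu element IDs.
    var menusByCountry = {
      GB: 'menu-uk',
      US: 'menu-us',
      AE: 'menu-dubai',
      SG: 'menu-singapore'
    };

    function showMenu(id) {
      document.getElementById(id).style.display = 'block';
    }

    // geoip2.country() is MaxMind's client-side lookup; it takes a
    // success callback and an error callback.
    geoip2.country(
      function (response) {
        showMenu(menusByCountry[response.country.iso_code] || 'menu-uk');
      },
      function () {
        // If the lookup fails, fall back to a default menu.
        showMenu('menu-uk');
      }
    );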
That said, geolocation is not perfect, and you may occasionally display the wrong content to a user. Since having four separate sites doesn't seem to be an option, you might want to consider one site with four country-specific domains and then display the menu based on the domain that the user visits. For instance, this module seems to do just that:
https://drupal.org/project/domain_menu_access
I am sure there are other modules that can be used in this way.