Inline many x3d files - inline

I am trying to load many x3d files in a single HTML file using Inline. Some of the scenes load normally, but others hang on "Loading". It seems there is a limit on the total number of scenes that can be rendered in a single HTML file.
Is this true? Is there any solution to this problem? I want to display as many scenes as needed.
Here is the link to the example folder where I am referencing the "Deer.x3d" file several times inside the HTML file.
This example was viewed using Firefox. Internet Explorer or Chrome might not work, since they don't allow loading local files.

I also tested your page in x_ite, which gives better error messages in the console. Browsers have limits on how many WebGL objects, such as framebuffers, they can create. If you create 100 X3D browsers within an HTML page, 100 framebuffers and other objects must be created, which not all browsers support.
A better solution is to create preview images of your x3d scenes and, on click or whatever event you prefer, open an X3D browser.
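For example, here is a minimal sketch of that idea using x3dom (your page may be using a different X3D browser; Deer.x3d is taken from your example, while the preview image name and the element ids are made up):

<script src="https://www.x3dom.org/download/x3dom.js"></script>
<link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css">

<!-- show a plain preview image first; no WebGL context is created yet -->
<img id="preview" src="deer_preview.png" onclick="showScene()">
<div id="viewer"></div>

<script>
function showScene() {
  // create a single X3D browser only when the user asks for it
  document.getElementById('viewer').innerHTML =
    '<x3d width="400px" height="300px"><scene>' +
    '<inline url="Deer.x3d"></inline>' +
    '</scene></x3d>';
  document.getElementById('preview').style.display = 'none';
  x3dom.reload();   // let x3dom pick up the dynamically added element
}
</script>

This way only the scenes the user actually opens consume WebGL resources.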

Related

How can I download a NextJS page as a static HTML file?

So I have this website made with Next, and on one page there are some graphs (the graphs' content changes as it fetches an API) and some info.
I want to add a button to the page that, when pressed, downloads the page as an HTML file with all the JS and CSS included in that file instead of loaded separately. Does anyone have any idea how to approach this problem? (The graphs' content should be the same as it was at the time of downloading.)
(The reason I want to do this is that I want to distribute these files to others and allow them to read them without an internet connection.)
You can't really download a React 'page' because there are no pages in React to download.
Next further complicates this because it server-side renders everything and rehydrates client-side. If you inspect one of your pages, you'll see the JSON blocks Next uses for data. Look for the __NEXT_DATA__ script (usually in the footer of your page).
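For illustration, the embedded block looks roughly like this in the rendered HTML (the exact contents depend on your page; the values below are placeholders):

<script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{...}},"page":"/graphs","query":{},"buildId":"..."}</script>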
I think there are two strategies you could use:
Screen-capture the graphs during your build sequence and push the images over to an AWS S3 bucket or similar (cumbersome)
When I ran into a requirement like this, I just made the data for the graph available as a JSON download just below the graph and it satisfied the use case sufficiently.
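A sketch of that last approach in plain browser JavaScript (graphData and the file name are placeholders for whatever data your page has already fetched):

function downloadGraphData(graphData) {
  // serialize the data currently shown in the graph
  const blob = new Blob([JSON.stringify(graphData, null, 2)], { type: 'application/json' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'graph-data.json';   // placeholder file name
  a.click();
  URL.revokeObjectURL(url);
}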
If you just want to download the assets and take a look, a workaround is probably leveraging Next's static export (next export). This allows you to run yarn build and generate a static export of your entire site. This should include the file you're looking for.
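Roughly, for Next.js versions where next export is still available:

yarn build         # produce the optimized production build
yarn next export   # write a static HTML/JS/CSS copy of the site to ./out

Note that pages relying on request-time server-side data fetching can't be exported this way, and newer Next.js versions replace the command with an output: 'export' config option.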
Just some ideas to think through.

In a website's ReactJS code I can see a lot of unused JS/CSS, but how can I reverse-map it to source code files to actually make changes in the code?

I have a website which has a loading time of 10 seconds, and which we want to reduce to about 3 seconds. I have two questions about it:
1. When I analyze bundle loading in the network tab of dev tools, I can see some JS/CSS files that are barely used during the home page load. But since bundle.js contains everything, I can't see which unused part of the JS is present in which source code file. Is there a tool or way to reverse-map the uncovered JS and CSS to the actual source code files so that I can modify them?
2. While the bundle is downloading, is there a way to show the user a spinner or progress bar while they wait, which is obviously better than showing a blank page?
I tried Lighthouse and analyzed the loading using the network tools.
React supports code splitting: https://reactjs.org/docs/code-splitting.html
If you implement code splitting in your app, then you can use a fallback component.
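A minimal sketch of that pattern (GraphPage and the spinner markup are placeholders for your own components):

import React, { Suspense, lazy } from 'react';

// the heavy page is split into its own chunk and only downloaded when first rendered
const GraphPage = lazy(() => import('./GraphPage'));

function App() {
  return (
    // the fallback is shown while the chunk is still downloading
    <Suspense fallback={<div className="spinner">Loading…</div>}>
      <GraphPage />
    </Suspense>
  );
}

export default App;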

Creating a loading page for a website

I am currently working on a website, and while the entire page loads, the page is "jumping", so I would like to know how I can create, let's say, an entire black screen where something is displayed while the page loads.
That seems quite easy, but since I have separate files for the header, the footer and the content, I was wondering how to coordinate all of them and still have nice code.
I am working with AngularJS. I read a lot about $viewContentLoaded and also tried ng-cloak, but if any of you has an awesome solution that stays simple, it would be great :)
Thanks
I would say that one approach you can take is to use some sort of templating engine for your HTML files (like Jade, for example). In this way, you can keep all the code nicely separated in multiple files, and using a task runner like Gulp or Grunt you can compile your HTML files before serving the page.
The important difference here is that you won't have to load all the page parts (header, footer and so on) using AJAX requests. Instead, they will already be rendered in your HTML page, allowing you to create a nice loader using ng-cloak on your content part.
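A minimal sketch of the loader part, assuming the templating step already rendered the header, content and footer into the page (class and controller names are made up):

<style>
  /* hide uncompiled Angular content until it is ready */
  [ng-cloak] { display: none !important; }
</style>

<!-- visible immediately; Angular hides it as soon as it has compiled the page -->
<div class="loading-overlay" ng-show="false">Loading...</div>

<!-- header, content and footer already rendered by the templating engine -->
<div ng-controller="MainCtrl" ng-cloak>
  <!-- page content here -->
</div>

The overlay needs no Angular to appear, and ng-show="false" makes it disappear the moment Angular takes over, so the user never sees the half-loaded, "jumping" layout.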

Access Specific PDF Page in WPF WebBrowser Control

I have a WebBrowser control in my application that is used to display PDF files that have been created with iTextSharp and are stored locally on the hard drive.
I would like to be able to navigate the file (next, previous, first, last, TOC) from my application rather than using the built-in navigation of the reader in the browser.
I have seen that you can navigate to specific pages by using
Browser.Navigate("filename.pdf#page=?");
This works the first time, but when trying to navigate to a different page, it makes the browser disappear completely with no errors. However, I can reload the file without a problem if I don't have the #page=? suffix on the file URL. Any ideas on this?
Alternatively, is there any way in iTextSharp to add something to the file so that it can be navigated from an external command?
All the official parameters that can be used to navigate through a PDF via the query string after the ? character are listed in a document published by Adobe: Parameters for Opening PDF Files
You already mentioned the page parameter. Another option could be using named destinations: nameddest=destination. In this case, you need to add the anchor with name destination to the file using iTextSharp.
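On the WPF side the call then looks like this (Chapter3 is a placeholder for whatever destination name you add with iTextSharp):

Browser.Navigate("filename.pdf#nameddest=Chapter3");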
Note that not all viewers implement these parameters. Adobe supports them in Adobe Reader and in the Adobe Reader plug-in, but there is no guarantee that they will work in pdf.js (Firefox), Pdfium (Google Chrome),... If your browser disappears when using an open parameter, you may have hit a bug in the browser or the viewer plug-in that causes the browser to crash. Neither iTextSharp nor iText can crash a browser ;-)
There is no other way to navigate a PDF from an external application. The only thing you can do is add JavaScript to the PDF so that it always opens at the same page. This is done using an open action. I don't think this solves your problem, as it would mean that you have to change the PDF file every time you want it to open at a different page.

Getting images with HTTP Request in C

I am writing a program in C that acts like a proxy server on a Linux system: the client asks it for a web page,
it sends an HTTP GET request to a remote server, and it gets the server's response (the web page), which is saved in an .html file.
Here is my problem: most web sites have references to images, so when I try to view the .html file the proxy created, the images don't appear.
I have searched a lot but found nothing. Is there a way to write some code to GET the images too?
Thank you in advance
You're going to have to write code that parses the HTML file you get back and looks for image references (img tags), then queries the server for those image files. This is what web browsers are doing under the hood.
You have an additional problem, though, which is that the image references in the HTML file point to the original server. I'm assuming that, since they don't load for you, the server that returned the original HTML isn't available. In that case, after you get each image file, you will need to give it a name on the local filesystem and then alter the reference in the HTML (programmatically) to point to your new local image name.
So for example:
<img src='http://example.com/image1.png'>
would become
<img src='localImage1.png'>
If you're querying arbitrary websites then you'll also find that there are various other files you'll need to do the same with, like CSS files and JavaScript files. In general it's hard to mirror arbitrary web pages accurately - browsers have complex object models they use to interpret web pages because they have to deal with things like CSS and JavaScript, and you may need to be able to 'run' all that dynamic code to even be sure what files to download from the server (e.g. JavaScript including other JavaScript, etc.).
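A minimal sketch of just the parsing step in C, assuming the HTML is already in a NUL-terminated buffer (fetching each URL would reuse the GET code you already have, and rewriting the src attribute to a local name would happen at the same spot):

#include <stdio.h>
#include <string.h>

/* Very naive scan: find each <img ...> tag and print its src URL.
 * Assumes quoted src attributes and well-formed markup. */
static void list_image_urls(const char *html)
{
    const char *p = html;

    while ((p = strstr(p, "<img")) != NULL) {
        const char *src = strstr(p, "src=");
        if (src == NULL)
            break;
        src += 4;                      /* skip past src= */

        char quote = *src;             /* either ' or " */
        if (quote != '"' && quote != '\'') {
            p = src;
            continue;
        }
        src++;

        const char *end = strchr(src, quote);  /* closing quote of the URL */
        if (end == NULL)
            break;

        printf("image: %.*s\n", (int)(end - src), src);
        p = end + 1;                   /* keep scanning after this attribute */
    }
}

int main(void)
{
    const char *sample =
        "<html><body>"
        "<img src='http://example.com/image1.png'>"
        "<img src=\"http://example.com/image2.png\">"
        "</body></html>";

    list_image_urls(sample);
    return 0;
}

A real proxy would GET each printed URL, save the response under a local file name, and splice that local name back into the HTML in place of the original reference, as described above.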
