Where to find definitions of the pre-built images of SageMaker? - amazon-sagemaker

I am trying to build on top of the SageMaker pre-built image sagemaker-base-python-310 (listed here), for example starting from the pre-built image and adding additional requirements to it.
Could anyone point me to where the definition of the container underlying such an image (e.g. a Dockerfile) can be found?
What I tried
Searched the AWS GitHub repos
Searched in ECS

SageMaker Studio images aren't available publicly. The DLC-based images (such as PyTorch, TensorFlow, etc.) are essentially built on top of the frameworks (see https://github.com/aws/deep-learning-containers), but Base Python, Data Science, etc. are not open source.
If you're looking to add packages and the like as part of customization, I'd recommend using lifecycle configuration (LCC) scripts.
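If you go the LCC route, registering the script is a single API call. Here's a minimal sketch using boto3; the config name and the packages in the script are placeholders, and you'd still need to attach the config to your domain or user profile (e.g. via update_domain) for Studio to pick it up.

```python
import base64
import boto3

# Hypothetical lifecycle script: installs extra packages on top of the
# pre-built Base Python image each time the app starts.
LCC_SCRIPT = """#!/bin/bash
set -eux
pip install --upgrade scikit-learn pyarrow
"""

sm = boto3.client("sagemaker")

# Studio lifecycle config content must be base64-encoded.
resp = sm.create_studio_lifecycle_config(
    StudioLifecycleConfigName="install-extra-packages",  # placeholder name
    StudioLifecycleConfigContent=base64.b64encode(LCC_SCRIPT.encode()).decode(),
    StudioLifecycleConfigAppType="KernelGateway",
)
print(resp["StudioLifecycleConfigArn"])
```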

Related

How to allow additional 3rd party React modules to be installed after compilation of a static web server

I'm struggling with collisions of technical terms (most especially the term "plugin", which has about seven different meanings within the React development stack).
Short question:
Is there a way to pre-compile static webpack modules that can be installed separately from a main static React web application, while still sharing modules contained in the main web application? (That's the question as best I can formulate it with my relatively naïve React developer skills.) I'd like the ability to plug in web user interface components supplied by 3rd-party developers after the fact, i.e. installable runtime React UI components that don't require React compilation at install time.
Details:
I have a static React web app that allows remote control of audio plugins (specifically LV2 audio plugins). It's a single-page static React app (which communicates via web sockets with the running application), hosted by a static C++ web server. Realtime and IoT agility requirements make a Python-hosted dynamic web server and runtime compilation an unattractive prospect (https://github.com/rerdavies/pipedal).
What I want to do is allow extension of the web UI using separate bundles provided by 3rd-party LV2 plugins. The ideal solution would be to allow static webpack bundles, pre-compiled by the LV2 plugins and placed in /usr/lib/lv2/<pluginname.lvw>/resource directories, to be consumed by the web app at runtime. I'm using a custom C++ web server, so redirecting URLs into the /usr/lib/lv2/xx/resource directories is straightforward.
The main app would be distributed as one apt package. LV2 plugins would be compiled (potentially by 3rd-party developers) against an "sdk package" provided by the main app build, after the main app was built, and then distributed in separate packages. Ideally, I'd like to pre-compile the UI code for the plugins to static webpack modules before their installers are built.
I more-or-less understand how I would do this if I were using raw CLI tools and configuration files (tsc, webpack, babel). But I can't help thinking I would be reinventing the wheel. (And I do have concerns that I'm going to incur serious version-dependency problems.)
I would like to code-share the base modules (react, @mui controls, and a limited set of app-supplied components and interfaces).
I think I see the path through the various tools to make this happen using my own custom build script. I can get the TypeScript compiler to do code-splitting; I can probably figure out how to get the Babel transpiler to do the right thing. I think I understand how to write webpack configuration files that will share modules from the main app, and I can see a likely path to building and distributing an npm package that does the setup and build of LV2 plugin projects, plus supporting CMake build rules for building and installing such packages, &c. But I'm concerned that I'm going to go down a large rabbit hole trying to reinvent something that surely must exist already. And I can imagine seven thousand ways for this to go horribly wrong. :-P
So far, I have implemented the TypeScript compiler portion of the build procedure, and writing the various bits to dynamically intercept and service resource requests in the web server is trivial. But it has become painfully obvious that I also need the Babel and webpack build steps as well.
I haven't yet looked at the react-scripts package contents to see if I can steal code to build what I want there. Perhaps that's a viable path.
Is there a way to do this with off-the-shelf npm packages and off-the-shelf npm build procedures? I can find all kinds of bits to get me part way; but the integration of all the bits is rather daunting. Should I just do the deed, and start writing my own custom build scripts to make this happen?

How does GitHub store your repository files?

I'm feeling stupid, but I want to know how GitHub and Dropbox store user files, because I have a similar problem and need to store users' project files.
Is it just a matter of storing the project files somewhere on the server and referring to the location in a database field, or are there better methods?
Thanks.
GitHub uses Git to store repositories, and accesses those repos from their Ruby application. They used to do this with Grit, a Ruby library written to implement Git in Ruby, but it has since been replaced with rugged. There are Git reimplementations in other languages, such as JGit for Java and Dulwich for Python. This presentation gives some details about how GitHub has changed over the years and is worth watching/browsing the slides.
If you wanted to store Git repositories, what you'd do is store them on a filesystem (or a cluster thereof), keep a pointer in your database to where each repository lives, and then use a library like Rugged, JGit, or Dulwich to read from the repository.
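To make that pattern concrete, here is a small sketch using Dulwich (pure Python); the "database row" is faked with a dict and the repository path is a placeholder.

```python
from dulwich.repo import Repo

# Pretend this came from your application's database: all you really store
# there is a pointer to where the repository lives on the filesystem.
repo_record = {"owner": "alice", "name": "demo", "path": "/srv/git/alice/demo.git"}

repo = Repo(repo_record["path"])

# Walk the history from HEAD and print the last few commits.
for entry in repo.get_walker(max_entries=5):
    commit = entry.commit
    print(commit.id.decode(), "-", commit.message.decode().splitlines()[0])
```

Rugged and JGit give you the same kind of read access from Ruby and Java respectively.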
Dropbox stores files on Amazon's S3 service and then implements some wrappers around that for security and so on. This paper describes the protocol that Dropbox uses.
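The "wrappers for security" part usually boils down to never exposing the bucket directly and instead handing clients short-lived signed URLs. A rough sketch of that idea with boto3 (bucket and key names are made up for illustration):

```python
import boto3

s3 = boto3.client("s3")

# Store the user's file under a key you control, never a client-chosen path.
s3.upload_file("local_copy.bin", "my-user-files-bucket", "users/42/project.bin")

# Later, instead of serving the file yourself, hand out a URL that expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-user-files-bucket", "Key": "users/42/project.bin"},
    ExpiresIn=300,  # link is only valid for five minutes
)
print(url)
```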
The actual question you've asked is how to store user files. The simple answer is... on the filesystem. There are plugins for a lot of popular web frameworks for handling user file uploads and file management; Django has django-filer, for instance. The difficulty you'll encounter in rolling your own file upload management is building a sensible way to do permissions (so users can only download the files they are entitled to download), so it is worth looking into how the various framework plugins do it.
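To illustrate the permissions point, here is a rough Flask sketch: all names, the in-memory "table", and the auth helper are made up, and a real app would use its session layer and an actual database.

```python
import os
import uuid
from flask import Flask, abort, request, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = "/var/data/uploads"   # placeholder location

# Stand-in for a database table: file id -> owner and original filename.
files_table = {}

def current_user():
    # Placeholder: in a real app this comes from your authentication layer.
    return request.headers.get("X-User", "anonymous")

@app.post("/files")
def upload():
    f = request.files["file"]
    file_id = uuid.uuid4().hex
    # Never use the client-supplied filename as the on-disk name.
    f.save(os.path.join(UPLOAD_DIR, file_id))
    files_table[file_id] = {"owner": current_user(), "name": f.filename}
    return {"id": file_id}, 201

@app.get("/files/<file_id>")
def download(file_id):
    row = files_table.get(file_id)
    if row is None:
        abort(404)
    if row["owner"] != current_user():   # the permission check
        abort(403)
    return send_from_directory(UPLOAD_DIR, file_id, download_name=row["name"])
```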

DNN - differences between Document Library and Digital Asset Management

The documentation on DNN sites speaks of two default modules, namely Document Library and Digital Asset Management. The two modules seem to be quite similar in functionality, i.e. they both provide a mechanism for handling documents, but I haven't found any documentation that explains the different scenarios in which they should be used. Could anybody explain the scenarios each is meant for? And which of these modules provides more flexibility in terms of URL management and handling a large number of documents, on the order of 60,000 to 70,000?
Digital Asset Management is the "File Manager" module in current versions of DNN.
The Document Library module is intended for collections of documents managed outside of the file manager. It is more suited for presenting a display of documents, descriptions, download links, etc.
For managing a large number of documents, you might want to look at the DMX Pro module from Bring2Mind.net.
Or, for managing a large number of files, you might want to look at the DNNUserFiles module from Evotiva.com (Disclaimer: yes, I'm the author).

LightWave 3D 9.6 SDK trouble

Are there tutorials for the SDK (or at least an example), about how to create an export plugin (extract polygons from scene)?
In LightWave, you're not forced to write an export plugin to extract the polygons from a scene/object: the LWO and LWS formats are documented well enough to parse quite easily.
The file format documentation is in the filefmts folder of the SDK. You can also find libraries that parse LightWave files, such as the Open Asset Import Library.
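To give a sense of how approachable the format is: an LWO2 file is just an IFF container, a FORM header followed by tagged chunks with big-endian sizes (PNTS holds vertex positions, POLS the polygon index lists). A rough chunk walker in Python (the filename is a placeholder, and full polygon decoding is left out):

```python
import struct

def iter_lwo_chunks(path):
    """Yield (chunk_id, payload) for the top-level chunks of an LWO2 file."""
    with open(path, "rb") as f:
        form, _size, kind = struct.unpack(">4sI4s", f.read(12))
        if form != b"FORM" or kind != b"LWO2":
            raise ValueError("not an LWO2 file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, length = struct.unpack(">4sI", header)
            payload = f.read(length)
            if length % 2:
                f.read(1)  # chunks are padded to even lengths
            yield chunk_id, payload

for cid, data in iter_lwo_chunks("model.lwo"):   # placeholder filename
    print(cid.decode(), len(data), "bytes")
```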
If you still need to do it as a plugin, there's a sample plugin for Modeler, in the sample/Modeler/Input-Output/vidscape folder.
You can find a reasonable number of LightWave export plugins on the web. Some of them are compiled .p plugins, but some are Python .py or LScript *.ls; the latter two can be edited and tweaked to your needs quite easily with a text editor of your choice.
There are also wikis on the web covering the LightWave API commands available from scripts.

Local data sources for GIS Map plugin?

I am developing an ASP.NET intranet application that needs to have an interactive map interface.
There are some pretty neat Silverlight mapping plugins that I think could work well, specifically:
ArcGIS Silverlight API: http://resources.esri.com/arcgisserver/apis/silverlight/
DeepEarth mapping framework: http://www.codeplex.com/deepearth
There are no doubt many more plugins out there that will allow easy interaction between ASP.NET and the mapping interface (please suggest some if I've missed the major players).
My major concern, however, is using these tools with local data sources. What is the best option here? All I need is some basic satellite imagery of moderate resolution and some overlays of cities and country borders. Can I download a dataset of these images? I don't really care whether they are up to date, so long as the photos were taken in the last 20 years.
I want to be able to use local data sources because external internet connections could be very slow due to the nature of the organisation's work; intranet communication will always be much faster.
To summarise:
1.) Where can I find a dataset of moderate-quality global satellite imagery?
2.) Which web-based mapping plugin will allow me to plug into such a data source?
If I can get something like the DeepEarth demo (http://www.codeplex.com/deepearth) but grabbing the data from internal company servers, I would be very happy.
You can check out the free geodata listing at:
- http://www.freegis.org/database/?cat=1
Or have a look at:
- http://downloads.cloudmade.com/
where CloudMade provides downloadable OpenStreetMap data converted to shapefiles.
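Once a dataset like that is downloaded, reading the boundaries or city points on the server side is simple; for example, a quick sketch with the pyshp package (the path and the attribute index are placeholders that depend on the dataset you grab):

```python
import shapefile  # the pyshp package: pip install pyshp

# Placeholder path: point this at a downloaded boundaries/cities shapefile.
sf = shapefile.Reader("data/world_boundaries")

print([f[0] for f in sf.fields[1:]])         # attribute column names
for rec in sf.iterShapeRecords():
    attrs = rec.record                        # attribute values for this feature
    bbox = rec.shape.bbox                     # [xmin, ymin, xmax, ymax]
    print(attrs[0], bbox)
```

How you then hand the geometry to the mapping control (GeoJSON, WKT, or tiles rendered server-side) depends on the plugin you pick.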
