SageMaker Studio: Associate Notebook with Running App - amazon-sagemaker

I am running SageMaker Studio and, per the picture below, have a running instance of a specific instance type. It was created when I created a new notebook and picked the instance type and kernel, but that creation also ended with an error message saying I had exceeded the quota for that type. The instance seems to be running, though. How do I actually use it for a notebook?

It looks like you already have an app running on that instance, so if you open a notebook and select the ml.g4dn.xlarge instance and the pytorch-1.12-gpu optimized image, you will be able to use it. You can also create more notebooks with different images using the same instance.
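If you want to double-check what is already running (and on which instance type) before opening the notebook, here is a minimal sketch using boto3; the domain ID and user profile name are placeholders you would replace with your own:

```python
import boto3

# Assumes AWS credentials and region are already configured for this account.
sm = boto3.client("sagemaker")

# List the apps in the Studio domain for your user profile.
# "d-xxxxxxxxxxxx" and "my-user-profile" are placeholders.
apps = sm.list_apps(
    DomainIdEquals="d-xxxxxxxxxxxx",
    UserProfileNameEquals="my-user-profile",
)["Apps"]

for app in apps:
    # KernelGateway apps are the ones bound to a specific instance type
    # (e.g. ml.g4dn.xlarge) and image; JupyterServer is the Studio UI itself.
    detail = sm.describe_app(
        DomainId=app["DomainId"],
        UserProfileName=app["UserProfileName"],
        AppType=app["AppType"],
        AppName=app["AppName"],
    )
    print(app["AppType"], app["AppName"], app["Status"],
          detail.get("ResourceSpec", {}).get("InstanceType"))
```

If the ml.g4dn.xlarge KernelGateway app shows up here as InService, picking that same instance type and image in the notebook's kernel selector should attach to the existing app rather than trying to launch a new one, which is what trips the quota check.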

Related

Zeppelin interpreter: Not able to share interpreter across notebooks

Recently I upgraded my Zeppelin from 0.8.1 to 0.9.0-preview (and also upgraded Spark from 2.2 to 3.0.1).
Now I am not able to execute notebooks in parallel (whether by the same user or different users). The first executed notebook submits a job on Spark and keeps running, while all other notebooks show as waiting.
Even after the first notebook has completed successfully, the other notebooks are not able to execute.
I was able to run multiple notebooks simultaneously in the previous version.
The setting in the Zeppelin interpreter is:
You get only one session when you share your interpreter globally. That means every executed paragraph is queued and processed sequentially, as shown in the picture below:
Depending on your working environment, you should change your setting to per note or per user (in case of a multi-user environment) and to scoped or isolated mode.
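If you would rather script the change than click through the interpreter settings page, here is a rough sketch against Zeppelin's interpreter REST API; the host, port, and interpreter name are placeholders, and you may need to add authentication if Shiro security is enabled:

```python
import requests

ZEPPELIN = "http://localhost:8080"  # placeholder host and port

# Fetch all interpreter settings and pick the Spark one.
settings = requests.get(f"{ZEPPELIN}/api/interpreter/setting").json()["body"]
spark = next(s for s in settings if s["name"] == "spark")

# Move away from the single globally shared session so notebooks stop
# queueing behind each other: scoped per note, isolated per user.
spark["option"]["perNote"] = "scoped"
spark["option"]["perUser"] = "isolated"  # only matters in a multi-user setup

# Push the updated setting back.
resp = requests.put(f"{ZEPPELIN}/api/interpreter/setting/{spark['id']}", json=spark)
resp.raise_for_status()
```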
Below is an overview from the official documentation of the advantages and disadvantages of the shared, scoped, and isolated modes from a notebook perspective:

Weird ruby process in App Engine Flexible instance

I am connecting via SSH to one of the App Engine Flex instances with a .NET Core application running on it and I get this:
Where does that Ruby process (with 24% CPU usage) come from? Is it some internal Google service?
The running Ruby process is /usr/sbin/google-fluentd. This package contains the logging agent that underpins Stackdriver Logging, and it is written in Ruby (packaged as a gem), as explained in this document. All in all, the Ruby process is using CPU because of the application's logging.
As an aside, I noticed that the screenshot you uploaded contains your account ID and project ID. I strongly suggest you re-upload the picture without this information, for security and privacy reasons.

Passing custom parameters to docker when running Flink on Mesos/Marathon

My team is trying to set up an Apache Flink (v1.4) cluster on Mesos/Marathon. We are using the Docker image provided by Mesosphere. It works really well!
Because of a new requirement, the task managers have to be launched with extended runtime privileges. We can easily enable these runtime privileges for the app manager via the Marathon web UI. However, we cannot find a way to enable the privileges for the task managers.
In Apache Spark, we can set spark.mesos.executor.docker.parameters privileged=true in Spark's configuration file, so Spark passes this parameter to the docker run command. I am wondering if Apache Flink allows us to pass a custom parameter to docker run when launching task managers. If not, how can we start task managers with extended runtime privileges?
Thanks
There is a new parameter, mesos.resourcemanager.tasks.container.docker.parameters, introduced in this commit, which allows passing arbitrary parameters to Docker.
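For reference, once you are on a Flink build that includes that commit, the relevant flink-conf.yaml entries would look roughly like this (the image name is a placeholder, and privileged=true mirrors the Spark setting from the question):

```
mesos.resourcemanager.tasks.container.type: docker
mesos.resourcemanager.tasks.container.image.name: <your-flink-image>
mesos.resourcemanager.tasks.container.docker.parameters: privileged=true
```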
Unfortunately, this is not possible as of right now (or only for the framework scheduler, as Tobi pointed out).
I went ahead and created a Jira ticket for this feature so you can keep track of it, add details, or contribute it yourself: https://issues.apache.org/jira/browse/FLINK-8490
You should be able to tweak the setting for the parameters in the ContainerInfo of https://github.com/mesoshq/flink-framework/blob/master/index.js to support this. I'll eventually update the Flink version in the Docker image...

Azure Automation DSC - Permission and Module Issues

Are there any Azure Automation DSC gurus who can offer some guidance and know-how for pushing through a couple of impasses I am currently encountering?
The task at hand is: use an Azure Automation runbook to provision a VM. That VM should immediately be associated with a DSC configuration, which will adjust Windows features and settings and install SQL Server according to a specific configuration. All tasks need to be written in PowerShell and should require no manual input via the Azure portal at any point.
At this time, the runbook provisioning the VM is working perfectly. However, associating this new node with a DSC configuration is still a manual process, which also works (with the exception of the next issue mentioned below), but it needs to be automated instead. How is this done? Via DSC resources as children of the VM resource in the ARM template?
Getting SQL Server installed is the next step. The xSQLServer DSC module seemed perfect for achieving this, but it currently has a bug in Azure Automation, which means that the xSQLServerSetup resource is not available, even when using older versions of xSQLServer. So, there appear to be two possible workarounds...
Workaround 1: Skip xSQLServer and just run a PowerShell script that lives locally on the newly provisioned VM to install SQL Server from the command line using an INI file. The script to install SQL Server works, but only when run manually. When attempting to have DSC run this script, Azure throws an error that the script is not digitally signed. So, there appears to be a permission-scoping issue at play, and the DSC credential is not able to run the local PowerShell script even though the local admin credential is being passed in. How does one get around this?
Workaround 2: Apparently, it is supposed to be possible to provision a VM, compile the DSC MOF locally on that machine (with the full version of xSQLServer), and then push that registration back to Azure Automation. However, it is unclear how exactly this would be done, as it appears to also require running a local PowerShell script, which hits the same impasse as the first workaround. Is this perhaps done via a Custom Script extension in the ARM template, or...?
I can see all of the parts in play, and I've found several helpful resources online that give breadcrumbs to the solution. But the breadcrumbs are too far apart, and the proper way of wiring everything together is proving elusive. So, I'm here humbly asking for help and guidance in getting this worked out.
Any help would be greatly appreciated.
Thanks!
First of all, that's a lot of questions instead of one.
Unless this is some kind of homework, there is no point in installing SQL Server on a VM yourself; there are plenty of VM + SQL images in Azure, and provisioning one of those would take 5 minutes instead of 60.
"When attempting to have DSC run this script, Azure is throwing an error that the script is not digitally signed." This means your script is not signed (it is not related to rights or permissions). Look at the execution policy: you need to set it to Unrestricted before running your script (but you don't need to, because of the first point).
You compile the MOF or upload it, and then you can "tie" a VM to that MOF. Both parts can be automated with PowerShell, and there are a lot of guides on how to do that, like this one.
As a general rule, use an ARM template to do the whole thing; again, there are lots of examples of how to achieve that (just browse this repo). Provisioning infrastructure with PowerShell (on Azure) is not the best way of doing things.
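To make the ARM-template suggestion concrete: a template that declares the VM together with a Microsoft.Powershell.DSC extension can be deployed in a single call, so provisioning and DSC onboarding happen together. Purely as an illustrative sketch, here it is driven from the Azure SDK for Python (the PowerShell equivalent would be New-AzureRmResourceGroupDeployment); the subscription ID, resource group, template file, and parameter names below are all placeholders:

```python
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID; credentials come from the environment.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The template is assumed to declare the VM plus a Microsoft.Powershell.DSC
# extension resource, so the new node is configured as part of the deployment.
with open("vm-with-dsc.json") as f:
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    "my-resource-group",       # placeholder resource group
    "vm-with-dsc-deployment",  # deployment name
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {"vmName": {"value": "sqlvm01"}},  # placeholder parameter
        }
    },
)
print(poller.result().properties.provisioning_state)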

BIRT and iServer, dev/qa/production environments

I'm trying to set up my BIRT reports and the iServer they sit on so that the database the Data Sources connect to is determined by the environment. Our setup is currently just one iServer instance and many environments running a Tomcat webapp that hit it (this may be the problem...).
Essentially, the ideal is that the report connects differently in these places:
Local development, which runs a local Tomcat instance of the application that talks to the iPortal/iServer. Local database, but it should be easy to switch to other databases for debugging, etc.
QA deploy, QA database
Production deploy, production database
I've seen two options for how to fix this:
The first option is to bind the Data Source to a configuration file in resources somewhere. The problem here is that if you have only one iServer, its resources are local to the server it is on, not to where the webapp runs. So, if I understand it correctly, this does not provide the flexibility I'm looking for.
The second option is to pass in all the connection info as report parameters and have the application determine the correct parameters to send. This way the application could pull from a local configuration file. This option would work, but I'm wary of the security (or lack thereof) of passing around connection info and credentials.
Does anyone have a better option? Or do people just run local iServer instances for development? I can see that running an iServer for each environment might simplify this issue and allow reports released to production to be updated and tested in a QA environment without disrupting production, so maybe that is the solution.
One possible approach would be to set each of the connection properties conditionally in the Property Binding section of the Edit Data Source dialog, based on the value of a hidden parameter indicating which environment is to be accessed.
An example of this approach can be found here.
You mention that you are looking for an option for development, including the possibility of a local iServer. I think that would be overkill. Do your dev and initial testing in BIRT; you do not need an iServer to run the report. If you need resources on the iServer to run and test the report, you can reference those through the Server Explorer in BIRT Pro. Once you are ready to deploy, I would follow Mark's strategy above, using property bindings on the data source itself. That is as close to a best practice as exists for this migration requirement in BIRT.
