Can we create a PipelineModel and run a batch transform in local mode?
I don't see any documentation about this. I tried the following, but I get an error:
sagemaker_session = LocalSession()
sagemaker_session.config = {'local': {'local_code': True}}
model_1 = PyTorchModel(...)
model_2 = PyTorchModel(...)
model = PipelineModel(models=[model_1, model_2])
transformer = model.transformer(...)
transformer.transform(...)
self.sagemaker_client.create_model(**create_model_request)
TypeError: create_model() missing 1 required positional argument: 'PrimaryContainer'
It's still a feature request:
https://github.com/aws/sagemaker-python-sdk/issues/1846
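Until that lands, one possible workaround is to skip PipelineModel in local mode and chain the two models' transformers by hand, pointing the second transform at the first one's output. A rough sketch, assuming placeholder artifacts, entry points, and a dummy role (all names here are hypothetical):
from sagemaker.local import LocalSession
from sagemaker.pytorch import PyTorchModel

sagemaker_session = LocalSession()
sagemaker_session.config = {'local': {'local_code': True}}
role = 'arn:aws:iam::111111111111:role/dummy'  # local mode does not actually assume the role

# Stage 1: transform the raw input with the first model
model_1 = PyTorchModel(model_data='file://model_1.tar.gz', role=role,
                       entry_point='inference_1.py', framework_version='1.5.0',
                       py_version='py3', sagemaker_session=sagemaker_session)
transformer_1 = model_1.transformer(instance_count=1, instance_type='local',
                                    output_path='file:///tmp/stage_1_output')
transformer_1.transform('file:///tmp/input_data', content_type='application/json')
transformer_1.wait()

# Stage 2: feed the intermediate output to the second model
model_2 = PyTorchModel(model_data='file://model_2.tar.gz', role=role,
                       entry_point='inference_2.py', framework_version='1.5.0',
                       py_version='py3', sagemaker_session=sagemaker_session)
transformer_2 = model_2.transformer(instance_count=1, instance_type='local',
                                    output_path='file:///tmp/final_output')
transformer_2.transform('file:///tmp/stage_1_output', content_type='application/json')
transformer_2.wait()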
I am playing with Karate to test a resource that accepts a date that cannot be in the past.
Scenario: Schedule one
Given path '/schedules'
And request read('today_at_19h30.json')
When method post
Then status 201
I have created some JSON files (mostly duplicates, but with subtle changes) for each scenario. But I cannot seriously imagine changing the date in all of them every time I want to run my tests.
Using a date in the far future is not a good idea either, because there is some manual verification, and that would force us to click (on next) too many times.
Is there a way to include a variable or an expression in a file?
Thanks
There are multiple ways to "clobber" JSON data in Karate. One way is to just use a JS expression. For example:
* def foo = { a: 1 }
* foo.a = 2
* match foo == { a: 2 }
For your specific use case, I suspect embedded expressions will be a more elegant way to do it. The great thing about embedded expressions is that they work in combination with the read() API.
For example, where the contents of the file test.json are { "today": "#(today)" }:
Background:
* def getToday =
"""
function() {
var SimpleDateFormat = Java.type('java.text.SimpleDateFormat');
var sdf = new SimpleDateFormat('yyyy/MM/dd');
var date = new java.util.Date();
return sdf.format(date);
}
"""
Scenario:
* def today = getToday()
* def foo = read('test.json')
* print foo
Which results in:
Running com.intuit.karate.junit4.dev.TestRunner
20:19:20.957 [main] INFO com.intuit.karate - [print] {
"today": "2020/01/22"
}
By the way, if the getToday function has been defined, you can even do this: { "today": "#(getToday())" }. That may give you more ideas.
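For instance, a minimal sketch: if test.json instead contains { "today": "#(getToday())" }, the scenario no longer needs the intermediate variable at all:
Scenario:
* def foo = read('test.json')
* match foo == { today: '#(getToday())' }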
I am using the Table API on Flink 1.4.0. I have some Table objects that I need to convert to a DataSet of type Row. The project was built using Maven and imported into IntelliJ. I have the following code, and the IDE cannot resolve the tableEnvironment.toDataSet() method. Please help me out. Thank you.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnvironment = TableEnvironment.getTableEnvironment(env);
...
tableEnvironment.registerTableSource("table1",csvSource);
Table table1 = tableEnvironment.scan("table1");
DataSet<Row> result = tableEnvironment.toDataSet(table1, Row.class);
The last statement causes an error
"Cannot resolve toDataSet() method"
You might not be importing the right BatchTableEnvironment.
Please check that you import org.apache.flink.table.api.java.BatchTableEnvironment instead of org.apache.flink.table.api.BatchTableEnvironment. The latter is the common base class for the Java and Scala variants, and only the Java variant defines toDataSet().
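For reference, a minimal sketch of the imports and the conversion that should resolve on Flink 1.4 (the CSV source setup from your snippet is elided):
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment; // the Java variant, not ...table.api.BatchTableEnvironment
import org.apache.flink.types.Row;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnvironment = TableEnvironment.getTableEnvironment(env);
Table table1 = tableEnvironment.scan("table1"); // assumes "table1" was registered as in your code
DataSet<Row> result = tableEnvironment.toDataSet(table1, Row.class);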
If you want to read a DataSet from a CSV file, you can do it as follows:
DataSet<YourType> csvInput = env.readCsvFile("hdfs:///the/CSV/file") ...
More on this: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/batch/#data-sources
I read the Finatra getting started guide and I was able to write the HelloWorld Service and its feature test.
Currently my feature test looks like
server.httpPost(
path = "/hi",
postBody = """{"name": "Foo", "dob": 136190040000}""",
andExpect = Ok,
withBody = """{"msg":"Hello Foo. You are 15780 days old today"}""")
This works fine and my tests pass. However, my requirement is to extract the JSON returned by the server and then manually perform assertions on the returned object.
I changed my code to:
val response = server.httpPost(
path = "/hi",
postBody = """{"name": "Abhishek", "dob": 136190040000}""",
andExpect = Ok,
withBody = """{"msg":"Hello Abhishek. You are 15780 days old today"}""")
val json = response.contentString
This also works, and I can see the returned JSON inside the variable json.
My question is: if I have to deserialize this JSON into an object, should I just pull in a JSON library like circe and deserialize it with that?
Or can I use the Jackson framework that ships inside Finatra?
In all the examples I could find, Finatra "automatically" handles the JSON serialization and deserialization. But in my case I want to do this manually.
You can use the FinatraObjectMapper by calling (using your example) server.mapper. That wraps a Jackson ObjectMapper that you could use if you wanted the Jackson library without any of the Finatra add-ons.
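For example, a minimal sketch (HiResponse is a hypothetical case class matching your response body):
case class HiResponse(msg: String)

val response = server.httpPost(
  path = "/hi",
  postBody = """{"name": "Abhishek", "dob": 136190040000}""",
  andExpect = Ok)

// parse the raw body with the server's mapper instead of asserting withBody
val parsed = server.mapper.parse[HiResponse](response.contentString)
assert(parsed.msg.startsWith("Hello Abhishek"))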
Or you can pull in a different JSON library. If you are using SBT, you can restrict libraries to certain areas of your code, so if you wanted to use circe only in the test code, you could scope it with "test" in your build.sbt, the same way this line scopes ScalaTest:
"org.scalatest" %% "scalatest" % "2.2.6" % "test"
My question is similar to janpeter's question. I am studying the ebook by Tiller and trying to simulate the example 'Architecture Driven Approach' with OpenModelica and JModelica. I tried the minimal example 'BaseSystem' in OpenModelica and it works fine. But with JModelica version 1.14 I get errors in the compile process and my script fails. My Python script is:
import matplotlib.pyplot as plt
from pymodelica import compile_fmu
from pyfmi import load_fmu
# Variables: modelName, modelFile, extraLibPath
modelName = 'BaseSystem'
modelFile = 'BaseSystem.mo'
extraLibPath = r'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures'  # raw string, so the backslashes are not treated as escapes
compilerOption = {'extra_lib_dirs':[extraLibPath]}
# Compile model
fmuName = compile_fmu( modelName, modelFile, compiler_options=compilerOption)
# Load model
model = load_fmu( fmuName)
# Simulate model
res = model.simulate( start_time=0.0, final_time=5.0)
# Extract interesting values
res_w = res['sensor.w']
res_y = res['setpoint.y']
tSim = res['time']
# Visualize results
fig = plt.figure(1)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(tSim, res_w, 'g-')
ax2.plot(tSim, res_y, 'b-')
ax1.set_xlabel('t (s)')
ax1.set_ylabel('w (???)', color='g')
ax2.set_ylabel('y (???)', color='b')
plt.title('BaseSystem')
plt.legend()
plt.grid(True)
plt.show()
My problem is: how do I compile and simulate a model that is part of a package?
I am not a jModelica user, but I think I see some confusion in your script. You wrote:
modelName = 'BaseSystem'
modelFile = 'BaseSystem.mo'
extraLibPath = r'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures'
But that implies (to me) that the compiler should open the package stored at C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures. But the top-level package is ModelicaByExample and the model you want is Architectures.BaseSystem. So I think something like this is probably more appropriate:
modelName = 'Architectures.BaseSystem'
modelFile = 'package.mo'
extraLibPath = r'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample'
The essential point here is that you should be opening ModelicaByExample (specifically, the package.mo file in the ModelicaByExample directory). That opens the ModelicaByExample package. You need to open this package because it is the top-level package. You can't load just a sub-package (which is what it looked like you were trying to do). Then, once you've got ModelicaByExample loaded, you can ask the compiler to specifically compile Architectures.BaseSystem.
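With those values, the compile call from your script would stay the same, e.g. this sketch (whether the relative 'package.mo' resolves may depend on your working directory):
compilerOption = {'extra_lib_dirs': [extraLibPath]}
fmuName = compile_fmu(modelName, modelFile, compiler_options=compilerOption)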
I suspect OpenModelica was "helping" you by opening the top-level package anyway, even if you were asking it to open the sub-package.
But again, I don't know jModelica very well and I have definitely not tested any of these suggestions.
Thank you, Michael Tiller. With your support I found the solution.
First, the modelName has to be fully qualified. Second, as you mentioned, the extraLibPath should end at the top-level directory of the library ModelicaByExample. But then I got errors that JModelica couldn't find components or declarations that are part of the Modelica Standard Library (MSL).
So I added the modelicaLibPath to the MSL, but the error messages remained the same. After many attempts, I launched the command line with administrator privileges and all errors were gone.
Here is the executable python script: BaseSystem.py
### Attention!
# The script and/or the command line must be
# started with administrator privileges
import matplotlib.pyplot as plt
from pymodelica import compile_fmu
from pyfmi import load_fmu
# Variables: modelName, modelFile, extraLibPath
modelName = 'Architectures.SensorComparison.Examples.BaseSystem'
modelFile = ''
extraLibPath = r'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample'
modelicaLibPath = r'C:\OpenModelica1.9.2\lib\omlibrary\Modelica 3.2.1'
compileToPath = r'C:\Users\Tom\Desktop\Tiller2015a'
# Set the compiler options
compilerOptions = {'extra_lib_dirs':[modelicaLibPath, extraLibPath]}
# Compile model
fmuName = compile_fmu( modelName, modelFile, compiler_options=compilerOptions, compile_to=compileToPath)
# Load model
model = load_fmu( fmuName)
# Simulate model
res = model.simulate( start_time=0.0, final_time=5.0)
# Extract interesting values
res_w = res['sensor.w']
res_y = res['setpoint.y']
tSim = res['time']
# Visualize results
fig = plt.figure(1)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(tSim, res_w, 'g-')
ax2.plot(tSim, res_y, 'b-')
ax1.set_xlabel('t (s)')
ax1.set_ylabel('sensor.w (rad/s)', color='g')
ax2.set_ylabel('setpoint.y (rad/s)', color='b')
plt.title('BaseSystem')
plt.legend()
plt.grid(True)
plt.show()
I'm not sure if I'm using the NDB Search API as it's meant to be used. I've read through the documentation, but I think I'm either missing something or lacking in my Python skills. Can anyone confirm/improve this progression of using search?
# build the query object
query_options = search.QueryOptions(limit=results_per_page, offset=number_to_offset)
query_object = search.Query(query_string=escaped_param, options=query_options)
# searchResults object
video_search_results = videos.INDEX.search(query_object)
# ScoredDocuments list
video_search_docs = video_search_results.results
# doc_ids
video_ids = [x.doc_id for x in video_search_docs]
# entities
video_entities = [Video.GetById(x) for x in video_ids]
I might personally write this something more like:
# build the query object
query_options = search.QueryOptions(limit=results_per_page, offset=number_to_offset)
query_object = search.Query(query_string=escaped_param, options=query_options)
# do the search
video_search = search.Index(name=VIDEO_INDEX).search(query_object)
# list of matching video keys
video_keys = [ndb.Key(Video, x.doc_id) for x in video_search.results]
# get video entities
video_entities = ndb.get_multi(video_keys)
Using ndb.get_multi will be more efficient. You can use AppStats to verify that. You might also look into the async equivalent if you have other processing you can do while the RPCs are outstanding.
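For example, a minimal sketch of the async variant:
# start the batched get without blocking
futures = ndb.get_multi_async(video_keys)
# ... do other useful work here while the RPCs are in flight ...
video_entities = [f.get_result() for f in futures]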
I am not sure what the Video.GetById method actually is, but I would suggest you see the ndb documentation on Model.get_by_id.
It seems like everything is OK with your code. Is something going wrong with it?