I'm fairly new to programming. I was trying to launch my test, but I got: Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: No scenario set up
even though I do have the scenario set up (which is confusing).
I'm pretty sure it's something obvious that a newbie like me can't figure out, unfortunately.
I tried invalidating caches and reloading the project.
I'm using IntelliJ + Maven.
package simulations

import io.gatling.core.Predef._
import io.gatling.core.structure.{ChainBuilder, ScenarioBuilder}
import io.gatling.http.Predef._
import io.gatling.http.protocol.HttpProtocolBuilder

class GatlingDemoStore extends Simulation {

  val domain = "demostore.gatling.io"

  val httpProtocol: HttpProtocolBuilder = http
    .baseUrl("http://" + domain)

  object login {
    def userlogin: ChainBuilder = {
      exec(http("Login User")
        .post("/login")
        .formParam("_csrf", "${csrfValue}")
        .formParam("username", "user1")
        .formParam("password", "pass"))
    }
  }

  object CmsPages {
    def homepage: ChainBuilder = {
      exec(http("Load Home Page")
        .get("/")
        .check(status.is(200))
        .check(regex("<title>Gatling Demo-Store</title>").exists)
        .check(css("#_csrf", "content").saveAs("csrfValue")))
    }

    def aboutus: ChainBuilder = {
      exec(http("Load Home Page")
        .get("/about-us")
        .check(status.is(200))
        .check(substring("About Us")))
    }

    def categories: ChainBuilder = {
      exec(http("Load Categories Page")
        .get("/category/all")
        .check(status.is(200)))
        .pause(10)
    }

    def productpage: ChainBuilder = {
      exec(http("Load Product Page")
        .get("/product/black-and-red-glasses")
        .check(status.is(200)))
        .pause(15)
    }

    def addtocart: ChainBuilder = {
      exec(http("Add Product to Cart")
        .get("/cart/add/19"))
    }

    def viewcart: ChainBuilder = {
      exec(http("View Cart")
        .get("/cart/view"))
    }

    def checkout: ChainBuilder = {
      exec(http("Checkout")
        .get("/cart/checkout"))
    }

    val User: ScenarioBuilder = scenario("DemoStore Simulation")
      .exec(CmsPages.homepage)
      .pause(5)
      .exec(CmsPages.aboutus)
      .pause(5)
      .exec(CmsPages.categories)
      .pause(20)
      .exec(CmsPages.productpage)
      .pause(5)
      .exec(CmsPages.addtocart)
      .pause(2)
      .exec(login.userlogin)

    setUp(
      User.inject(atOnceUsers(1))
    ).protocols(httpProtocol)
  }
}
I've reformatted your code so your error is more obvious: you're calling setUp not in the body of the GatlingDemoStore class but in the body of the CmsPages object, which is never initialized (in Scala, objects are initialized lazily, and here nothing outside CmsPages ever references it, so its body never runs).
Move setUp into the body of the GatlingDemoStore class.
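Roughly, a minimal sketch of the intended structure, using only pieces already in your code (the other page chains are elided):

class GatlingDemoStore extends Simulation {

  val httpProtocol: HttpProtocolBuilder = http
    .baseUrl("http://demostore.gatling.io")

  object CmsPages {
    def homepage: ChainBuilder = {
      exec(http("Load Home Page").get("/"))
    }
    // ... the other page chains stay here ...
  } // CmsPages ends here

  // the scenario and setUp now live directly in the Simulation class body
  val User: ScenarioBuilder = scenario("DemoStore Simulation")
    .exec(CmsPages.homepage)

  setUp(
    User.inject(atOnceUsers(1))
  ).protocols(httpProtocol)
}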
Important note: as you're new to programming, you should probably go with Java (supported since Gatling 3.7) instead of Scala.
Thank you very much for your help, greatly appreciated.
Seems like a rookie mistake.
That's a great idea, actually; I'll do some research on how I can do that.
I was wondering how to make my app save data after a restart. (The user can delete tasks and add new tasks to the list, as well as check a box when a task is done. I want the app to save this data so that when the user exits the app, it will still display all the tasks they left it with.)
I have been reading on Google for a few hours now, and I got to
[1]: https://flutter.dev/docs/cookbook/persistence/reading-writing-files
this link, which someone recommended on a similar post. But after reading it through I am a bit confused about where to start with my app.
I'm including some of my code; if you could help me I would really appreciate it, as after hours of reading and watching tutorials I am still unsure where to start or which approach is best.
My main.dart is this
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return ChangeNotifierProvider(
create: (context) =>
TaskData(), //changing builder: to create: fixed the errors i been having
child: MaterialApp(
home: TasksScreen(),
),
);
}
}
class TaskData extends ChangeNotifier {
List<Task> _tasks = [
Task(name: "Sample task 1"),
Task(name: "Sample task 2"),
Task(name: "Sample task 3"),
];
UnmodifiableListView<Task> get tasks {
return UnmodifiableListView(_tasks);
}
int get taskCount {
return _tasks.length;
}
void addTask(String newTaskTitle) {
final task = Task(name: newTaskTitle);
_tasks.add(task);
notifyListeners();
}
void updateTask(Task task) {
task.toggleDone();
notifyListeners();
}
void deleteTask(Task task) {
_tasks.remove(task);
notifyListeners();
  }
}
Thank you so much!
The basic method is to use the device's local storage.
1 - Add the shared_preferences package to your pubspec.yaml, for example:
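(A minimal sketch of the relevant pubspec.yaml section; the version constraint shown is only an illustration, use whatever recent version pub suggests.)

dependencies:
  flutter:
    sdk: flutter
  shared_preferences: ^2.0.0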
2 - Create a class to write and read data:
import 'dart:convert';
import 'package:shared_preferences/shared_preferences.dart';
class StoreData {
StoreData._privateConstructor();
static final StoreData instance = StoreData._privateConstructor();
Future<void> saveString(String key, String value) async {
try{
SharedPreferences pref = await SharedPreferences.getInstance();
final encodedValue = base64.encode(utf8.encode(value));
pref.setString(key, encodedValue);
} catch (e){
print('saveString ${e.toString()}');
}
}
Future<String> getString(String key) async {
SharedPreferences pref = await SharedPreferences.getInstance();
final value = pref.getString(key) == null ? '' : pref.getString(key);
if (value.length > 0) {
final decodedValue = utf8.decode(base64.decode(value));
return decodedValue.toString();
}
return '';
}
Future<bool> remove(String key) async {
SharedPreferences pref = await SharedPreferences.getInstance();
return pref.remove(key);
}
}
3 - Use:
Save Data:
StoreData.instance.saveString('name', 'sergio');
Retrieve Data:
final String storedName = await StoreData.instance.getString('name');
print('The name is $storedName');
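For the task list in your question, a rough sketch of how this could be glued to TaskData. This is an assumption-heavy sketch: the name and isDone fields on Task are guesses based on the posted code, and you would call saveTasks from addTask, updateTask and deleteTask, plus call loadTasks once when the app starts.

import 'dart:convert';

// Hypothetical helpers on top of StoreData; adapt the Task field names
// (name, isDone) to your actual Task class.
Future<void> saveTasks(List<Task> tasks) async {
  final encoded = json.encode(
    tasks.map((t) => {'name': t.name, 'isDone': t.isDone}).toList(),
  );
  await StoreData.instance.saveString('tasks', encoded);
}

Future<List<Task>> loadTasks() async {
  final stored = await StoreData.instance.getString('tasks');
  if (stored.isEmpty) return [];
  final decoded = json.decode(stored) as List;
  return decoded
      .map((m) => Task(name: m['name'], isDone: m['isDone']))
      .toList();
}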
There are many other options, such as SQLite, NoSQL, or a database on a back-end, but local storage is the most basic case.
What you need is a database or a document-based data store. You can store data in a local SQLite DB using the sqflite plugin, or you can store it in a JSON file.
You can also use a server or cloud service. Firebase is pretty well integrated with Flutter, but AWS and Azure are also great.
You could write the data to a text file in the assets, but that would be very complicated.
I am implementing an object-detection web application using React and TensorFlow.js. I converted my model to a TensorFlow.js model so that I can load it into my React application. I want to load the model via a simple HTTP endpoint, served by a Flask server currently hosted on my local machine. The Flask main file looks as follows:
from flask import Flask
from flask_cors import CORS, cross_origin
import os
app = Flask(__name__)
cors = CORS(app)
@app.route('/')
def hello_world():
return 'Hello, World!'
@app.route('/model', methods=['GET'])
def get_modeljson():
"""
Get the model.json file and return its contents.
"""
current_dir = os.getcwd()
file_path = os.path.join(current_dir, "models", "model.json")
with open(file_path, "r") as f:
return f.read()
if __name__ == '__main__':
app.run(debug=True, host="0.0.0.0", threaded=True)
I have written a function in my React application that loads the graph model using the endpoint /model that is defined in the code above. The React function looks as follows:
import {useEffect, useState} from 'react';
import * as tf from '@tensorflow/tfjs';
import {loadGraphModel} from '@tensorflow/tfjs-converter';
function Model(props) {
const [model, setModel] = useState();
async function loadModel() {
try {
const model_url = "http://127.0.0.1:5000/model";
const result = await fetch(model_url);
const result_json = await result.json();
const model = await loadGraphModel(result_json);
console.log('model loaded...')
setModel(model);
console.log("Model correctly loaded");
} catch (err) {
console.log(err);
console.log("failed load model");
}
}
useEffect(() => {
tf.ready().then(() => {
loadModel();
});
}, []);
async function predictFunction() {
// use model to make predictions
}
return (
<Button onClick={() => {
predictFunction();
}}
/>
);
}
export default Model;
The Flask API correctly returns the model.json file; however, loadGraphModel throws the following error:
TypeError: url.startsWith is not a function
at indexedDBRouter (indexed_db.ts:215)
at router_registry.ts:95
at Array.forEach (<anonymous>)
at Function.getHandlers (router_registry.ts:94)
at Function.getLoadHandlers (router_registry.ts:84)
at Module.getLoadHandlers (router_registry.ts:110)
at GraphModel.findIOHandler (graph_model.ts:107)
at GraphModel.load (graph_model.ts:126)
at loadGraphModel (graph_model.ts:440)
at loadModel (Model.js:16)
I cannot find any documentation about url.startsWith. Does anyone see what is going wrong here?
Going through the code, I see a major issue: you are sending model.json from the backend to the frontend, then loading the model from that model.json and performing inference on it. It would work, but it is not efficient at all. Imagine having to do this a couple hundred times, and the model.json file can be big. Instead, there are two routes you could take:
Host the model on the backend, send the data to the backend through a POST request, and then make predictions on the data from the request.
Use the model on the frontend and make predictions on the input data there.
There are some errors in the code which are causing the exception, but this is the issue you need to fix first. If you could give me more information about the inputs you are working with, I could draft a workable solution.
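As a rough sketch of the second route (a sketch only, using names from the posted code, and assuming the Flask app also serves the weight shard files that model.json references): loadGraphModel expects a URL string or an io.IOHandler rather than the parsed JSON contents, which is most likely what triggers the url.startsWith error.

import {loadGraphModel} from '@tensorflow/tfjs-converter';

async function loadModel() {
  // Pass the URL itself; tfjs fetches model.json and then the weight files
  // relative to it, so those files must be reachable from the same server.
  const model = await loadGraphModel('http://127.0.0.1:5000/model');
  return model;
}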
I've got a Flink KeyedCoProcessFunction that registers processing-time timers in a larger Flink streaming job, and I'm trying to create unit tests for the entire job using the Flink MiniCluster, but I can't get the onTimer() callback in the KeyedCoProcessFunction to trigger.
Has anyone gotten this to work? Did it require any special configuration?
Switching to event time works fine, so I'm wondering whether this just doesn't work with the Flink MiniCluster or whether there is something wrong with my implementation.
I wrote a simple test in Scala to see if I could get this to work.
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.streaming.api.functions.source.{ParallelSourceFunction, SourceFunction}
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.test.streaming.runtime.util.TestListResultSink
import org.apache.flink.test.util.MiniClusterWithClientResource
import org.apache.flink.util.Collector
import org.scalatest.BeforeAndAfter
import org.scalatest.flatspec.AnyFlatSpec
import org.slf4j.LoggerFactory
class TimerTest extends AnyFlatSpec with BeforeAndAfter {
private val SlotsPerTaskMgr = 1
val flinkCluster = new MiniClusterWithClientResource(new MiniClusterResourceConfiguration.Builder()
.setNumberSlotsPerTaskManager(SlotsPerTaskMgr)
.setNumberTaskManagers(1)
.build)
before {
flinkCluster.before()
}
after {
flinkCluster.after()
}
"MiniCluster" should "trigger onTimer" in {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
implicit val longTypeInfo: TypeInformation[Long] = TypeInformation.of(classOf[Long])
val sink = new TestListResultSink[Long]
env.addSource(new MyLongSource(100))
.keyBy(v => v)
.process(new MyProccesor())
.addSink(sink)
env.execute()
println("Received " + sink.getResult.size() + " output records.")
}
}
class MyProccesor extends KeyedProcessFunction[Long, Long, Long] {
private val log = LoggerFactory.getLogger(this.getClass)
override def processElement(
value: Long,
ctx: KeyedProcessFunction[Long, Long, Long]#Context,
out: Collector[Long]): Unit = {
log.info("Received {} at {}", value, ctx.timerService().currentProcessingTime())
if (value % 10 == 0) {
log.info("Scheduling processing timer for {}", ctx.timerService().currentProcessingTime() + 10)
ctx.timerService().registerProcessingTimeTimer(ctx.timerService().currentProcessingTime() + 10)
}
}
override def onTimer(
timestamp: Long,
ctx: KeyedProcessFunction[Long, Long, Long]#OnTimerContext,
out: Collector[Long]): Unit = {
log.info("Received onTimer at {}", timestamp)
out.collect(timestamp)
}
}
class MyLongSource(n:Int) extends ParallelSourceFunction[Long] {
@volatile private var stop = false
override def run(ctx: SourceFunction.SourceContext[Long]): Unit = {
for(i <- 1 to n) {
if(stop) return;
println("Sending " + i)
ctx.collect(i)
}
Thread.sleep(1000)
}
override def cancel(): Unit = {
stop = true
}
}
I was finally able to get some consistent results by adding a Thread.sleep(1000) at the end of the source's run() method. It seems that once the source exits, the remaining messages get processed and then everything is shut down, even if there are pending timers.
When a Flink job shuts down, any pending processing time timers are simply ignored. They never fire.
For what it's worth, there's some ongoing discussion on the Flink dev mailing list about offering an option to trigger all pending processing time timers. See http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DISCUSS-FLIP-134-DataStream-Semantics-for-Bounded-Input-td37365.html#a37558.
So I'm having a little trouble when updating a resource in Grails. Here is my controller, in which I'm overriding some methods:
@Override
protected saveResource(resource) {
if (request.post) {
permissionsService.addPermission(resource.id, springSecurityService.getCurrentUser(), 'Outing')
}
resource.save(flush: true)
}
@Override
protected updateResource(resource) {
if (permissionsService.isAdmin()) {
println 'doesnt print'
saveResource(resource)
} else if (permissionsService.canWrite('Outing', params.id, springSecurityService.getCurrentUser())) {
println 'doesnt print'
saveResource(resource)
} else {
println 'prints to console'
response.status = 404
}
}
This is what the top of the controller looks like
import grails.converters.JSON
import grails.rest.*
class OutingController extends RestfulController {
static responseFormats = ['json']
OutingController() {
super(Outing)
}
...............
}
My problem is that when I perform a PUT, the resource is still updated and saved. I can't figure out why, because as far as I can tell it never hits saveResource or any other save method. I would like it to return status 404, but I could use some help with that. I'm basically adding permissions to the RestfulController. Thanks for any help.
Here is the RestfulController if you'd like to look. https://github.com/grails/grails-core/blob/master/grails-plugin-rest/src/main/groovy/grails/rest/RestfulController.groovy
I'm new to Protractor. I want to write a test verifying that there are no anchors with URLs giving 404 errors.
I've seen How to test html links with protractor?, but that is for one specific link only; I want to do it for all the links on the page.
The test should pass for HTTP status 200, as stated here: How to use protractor to get the response status code and response text?
I have two questions:
Does this test make sense in Protractor?
Is it possible to test this? If so, how?
I wanted to implement this as a page object so I could use a simple one-line expect statement in each page's spec file. In hindsight, I think this could have been achieved in a much simpler way with an API testing framework such as cheerio.js, but here is how to implement it using Protractor and Jasmine (using ES2015 syntax, so update Node to a current version). Please remember to install the request, bluebird and request-promise npm packages.
PageObject.js
crawlLinks(){
const request = require('request');
const Promise = require('bluebird');
const rp = require('request-promise');
return $$('a').then(function(elems){
return Promise.map(elems, elem => {
return elem.getAttribute("href").then(function(url){
if(url){
var options = {
method: 'GET',
uri: url,
resolveWithFullResponse: true
};
return rp(options).then(function(response){
console.log('The response code for ' + url + ' is ' + response.statusCode);
return response.statusCode === 200;
});
}
});
}).then((allCodes) => {
console.log(allCodes);
return Promise.resolve(allCodes);
});
});
}
Test
it("should not have broken links", function(){
expect(pageObject.crawlLinks()).not.toContain(false);
});
I think it's definitely doable, and it would make sense if the scope is limited, since this is not the typical UI test that selenium-webdriver is used for. You could do something like: find all links, get the underlying URL, and fire a GET request using a module like request. Here is a rough sketch.
var request = require('request');
var assert = require('assert');
element.all(by.tagName('a')).each(function(link) {
  // getAttribute returns a promise, so resolve it before firing the request
  link.getAttribute('href').then(function(url) {
    if (url) {
      request(url, function (error, response, body) {
        assert.equal(response.statusCode, 200, 'Status code is not OK for ' + url);
      });
    }
  });
});
404 errors appear in the browser console (at least in Chrome they do), and you can access that from Protractor:
browser.manage().logs().get('browser').then(function(browserLogs) {
browserLogs.forEach(function(log){
expect(log).toBeFalsy();
});
});
The code above will cause any console message to be treated as a test failure; you may adapt it to your needs. You can put similar code in an afterAll block in all of your tests.
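For example, a minimal sketch of such a hook (same logic as above, just wrapped in Jasmine's afterAll so it runs once per spec file):

afterAll(function() {
  // flag the spec file if anything was written to the browser console
  browser.manage().logs().get('browser').then(function(browserLogs) {
    browserLogs.forEach(function(log) {
      expect(log).toBeFalsy();
    });
  });
});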
Here I wrote a Java demo which may meet your requirement. I am not familiar with Protractor either, but I hope this can help.
package com.selenium.webdriver.test;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.methods.GetMethod;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
public class Traverse {
private WebDriver driver;
private String baseUrl;
private Map<String, String> tMap;
public Traverse(String url) {
driver = new HtmlUnitDriver();
baseUrl = url;
tMap = new HashMap<String,String>();
}
//get status code.
public int getRespStatusCodeByGet(String url) throws HttpException, IOException {
GetMethod method = new GetMethod(url);
HttpClient client = new HttpClient();
client.executeMethod(method);
return method.getStatusCode();
}
//single page traversal
public boolean search(String url) throws HttpException, IOException {
if(getRespStatusCodeByGet(url) != 200) {
System.out.println("Bad page " + url);
return false;
}
driver.get(url);
List<WebElement> elements = driver.findElements(By.tagName("a"));
for(int i=0; i<elements.size(); i++) {
String cUrl = elements.get(i).getAttribute("href");
String cText = elements.get(i).getText();
if(cUrl != null && cUrl.startsWith("http") && !tMap.containsKey(cText)) {
tMap.put(cText, cUrl);
System.out.println(cUrl);
search(cUrl);
}
}
return true;
}
//client
public static void main(String[] args) throws HttpException, IOException {
Traverse t = new Traverse("http://www.oktest.me/");
t.search(t.baseUrl);
}
}
Check the "Bad page" output and you can get what you want.