In an akka-http service, how does one cache some information per client session? This is not quite obvious from the docs. I would, for example, like to create an actor for each connection.
Where should I create that actor, and how do I get a reference to it from inside my stages?
My service is bound something like this:
val serverSource: Source[Http.IncomingConnection, Future[Http.ServerBinding]] =
Http().bind(interface = bindAddress, port = bindPort)
val bindingFuture: Future[Http.ServerBinding] =
serverSource
.to(Sink.foreach { connection =>
connection.handleWithSyncHandler (requestHandler)
// seems like I should set up some session state storage here,
// such as my actor
})
.run()
...
and later on:
val packetProcessor: Flow[A, B, Unit] = Flow[A]
.map {
case Something =>
// can i use the actor here, or access my session state?
}
I suspect I'm probably misinterpreting the whole paradigm in trying to make this fit. I can't tell if there is anything built in or how much I need to implement manually.
I have found Agent to be a very convenient mechanism for concurrent caching.
Say, for example, you want to keep a running Set of all the remote addresses that you have been connected to. You can set up an Agent to store the values and a Flow to write to the cache:
import scala.concurrent.ExecutionContext.Implicits.global
import akka.agent.Agent
import scala.collection.immutable
val addressCache = Agent(immutable.Set.empty[java.net.InetSocketAddress])
import akka.stream.scaladsl.Flow
val cacheAddressFlow = Flow[IncomingConnection] map { conn =>
addressCache send (_ + conn.remoteAddress) //updates the cache
conn //forwards the connection to the rest of the stream
}
This Flow can then be made part of your Stream:
val bindingFuture: Future[Http.ServerBinding] =
serverSource.via(cacheAddressFlow)
.to(Sink.foreach { connection =>
connection.handleWithSyncHandler (requestHandler)
})
.run()
You can then "query" the cache completely outside of the binding logic:
def somewhereElseInTheCode = {
val currentAddressSet = addressCache.get
println(s"address count so far: ${currentAddressSet.size}")
}
If your goal is to send all IncomingConnection values to an Actor for processing, then this can be accomplished with Sink.actorRef:
object ConnectionStreamTerminated
class ConnectionActor extends Actor {
override def receive = {
case conn : IncomingConnection => ???
case ConnectionStreamTerminated => ???
}
}
val actorRef = actorSystem actorOf Props[ConnectionActor]
val actorSink =
Sink.actorRef[IncomingConnection](actorRef, ConnectionStreamTerminated)
val bindingFuture: Future[Http.ServerBinding] =
  serverSource.to(actorSink).run() // .to keeps the Source's materialized value (the binding Future)
Since the suggested Agents have been deprecated, I would suggest using akka-http-session. It makes sure session data is secure and cannot be tampered with.
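A minimal sketch of what session handling with akka-http-session looks like, assuming the com.softwaremill "akka-http-session" dependency is on the classpath; the session case class, the server secret, and the route paths below are placeholders for illustration only:

import scala.util.Try
import akka.http.scaladsl.server.Directives._
import com.softwaremill.session._
import com.softwaremill.session.SessionDirectives._
import com.softwaremill.session.SessionOptions._

case class MySession(username: String)

// how the session is (de)serialized into the cookie value
implicit val serializer: SessionSerializer[MySession, String] =
  new SingleValueSessionSerializer(_.username, (un: String) => Try(MySession(un)))
// the secret must be a long, random string (at least 64 characters) in real use
implicit val sessionManager: SessionManager[MySession] =
  new SessionManager[MySession](SessionConfig.default("<a long random server secret>"))

val route =
  path("login") {
    post {
      setSession(oneOff, usingCookies, MySession("someUser")) {
        complete("session set")
      }
    }
  } ~
  path("current") {
    get {
      requiredSession(oneOff, usingCookies) { session =>
        complete(session.username) // per-client state travels with the session cookie
      }
    }
  }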
I have an Alexa-hosted skill. I am trying to find a way to access persistenceAdapter.S3PersistenceAdapter() from somewhere else (e.g. an Angular 2.x app). Can this be done, or do I have to use another database to replace S3? If that is the case, which database is recommended?
I use some sample code to access S3 from the Alexa skill. I have no idea how attributesManager works; I just copied and pasted.
.withPersistenceAdapter(
new persistenceAdapter.S3PersistenceAdapter({bucketName:process.env.S3_PERSISTENCE_BUCKET})
)
and
const attributesManager = handlerInput.attributesManager;
const sessionAttributes = await attributesManager.getPersistentAttributes() || {};
const temperature = sessionAttributes.hasOwnProperty('temperature') ? sessionAttributes.temperature : 0;
S3 isn't a database - it's object storage. If a database is what you need, you could use DynamoDB instead. It sounds like it is what you need - you wouldn't normally use S3 for this.
Anyway, you won't be able to use the ASK SDK in an Angular or other (non-Alexa-skill) project. But you could connect to S3 (or DynamoDB) using the AWS SDK.
https://github.com/aws/aws-sdk-js
There are three types of attributes for Alexa - request, session, and persistent. I noticed your variable is named sessionAttributes, but you're calling getPersistentAttributes.
Here's an example of how you'd use withPersistenceAdapter - https://www.talkingtocomputers.com/alexa-skills-kit-ask-sdk-v2#data-persistence
But here's an example if you use DynamoDB. It's simpler IMO:
module.exports.handler = Alexa.SkillBuilders.standard()
.addRequestHandlers(/* your handlers */)
.withTableName(/* your table name (string) */)
.withDynamoDbClient()
.lambda()
Then you could do something like (in an async function):
const att = await attributesManager.getPersistentAttributes()
const temperature = att.temperature ? att.temperature : 0
But of course, you need to save the attribute there first, if you want to access it. For example (in an async function):
const att = await attributesManager.getPersistentAttributes()
await attributesManager.setPersistentAttributes( { ...att, temperature: 10 }) // set the value
await attributesManager.savePersistentAttributes() // save it
You can also use S3 buckets; the syntax looks like this. They have get, set, and save methods like the persistent attributes in the example above; for more, see the Amazon docs.
const { S3PersistenceAdapter } = require('ask-sdk-s3-persistence-adapter');
const s3PersistenceAdapter = new S3PersistenceAdapter({ bucketName: 'FooBucket' });
I'm trying to create a Gatling scenario which requires switching the protocol to a different host during the test. The user journey is
https://example.com/page1
https://example.com/page2
https://accounts.example.com/signin
https://example.com/page3
so as part of a single scenario, I need to either switch the protocol defined in the scenario setup, or switch the baseUrl defined on the protocol, but I can't figure out how to do that.
A basic scenario might look like
package protocolexample
import io.gatling.core.Predef._
import io.gatling.http.Predef._
class Example extends Simulation {
val exampleHttp = http.baseURL("https://example.com/")
val exampleAccountsHttp = http.baseURL("https://accounts.example.com/")
val scn = scenario("Signin")
.exec(
http("Page 1").get("/page1")
)
.exec(
http("Page 2").get("/page2")
)
.exec(
// This needs to be done against accounts.example.com
http("Signin").get("/signin")
)
.exec(
// Back to example.com
http("Page 3").get("/page3")
)
setUp(
scn.inject(
atOnceUsers(3)
).protocols(exampleHttp)
)
}
I just need to figure out how to either switch the host or the protocol for the 3rd step. I know I can create multiple scenarios, but this needs to be a single user flow across multiple hosts.
I've tried directly using the other protocol
exec(
// This needs to be done against accounts.example.com
exampleAccountsHttp("Signin").get("/signin")
)
which results in
protocolexample/example.scala:19: type mismatch;
found : String("Signin")
required: io.gatling.core.session.Session
exampleAccountsHttp("Signin").get("/signin")
and also changing the base URL on the request
exec(
// This needs to be done against accounts.example.com
http("Signin").baseUrl("https://accounts.example.com/").get("/signin")
)
which results in
protocolexample/example.scala:19: value baseUrl is not a member of io.gatling.http.request.builder.Http
You can use an absolute URI (including the protocol) as a parameter for Http.get, Http.post, and so on.
class Example extends Simulation {
val exampleHttp = http.baseURL("https://example.com/")
val scn = scenario("Signin")
.exec(http("Page 1").get("/page1"))
.exec(http("Page 2").get("/page2"))
.exec(http("Signin").get("https://accounts.example.com/signin"))
.exec(http("Page 3").get("/page3"))
setUp(scn.inject(atOnceUsers(3))
.protocols(exampleHttp))
}
see: https://gatling.io/docs/current/cheat-sheet/#http-protocol-urls-baseUrl
baseURL: Sets the base URL of all relative URLs of the scenario on which the configuration is applied.
I am currently working in Gatling.
If you have a single base URL:
var httpConf1 = http.baseUrl("https://s1.com")
For multiple base URLs, you can use the baseUrls function, which takes several base URLs (or a list of them):
var httpConf1 = http.baseUrls("https://s1.com", "https://s2.com", "https://s3.com")
Another approach is to read all the base URLs from a CSV file into a ListBuffer and convert it to a List when passing it to http.baseUrls:
var listOfTenants: ListBuffer[String] = new ListBuffer[String] // buffer
var httpConf1 = http.baseUrls(listOfTenants.toList) // converting the buffer to a list
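A minimal sketch of how that buffer could be filled from a file, assuming a hypothetical tenants.csv that lists one base URL per line (the file name and layout are illustration only):

import scala.collection.mutable.ListBuffer
import scala.io.Source
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// read one base URL per line from the (hypothetical) tenants.csv into the buffer
var listOfTenants: ListBuffer[String] = new ListBuffer[String]
Source.fromFile("tenants.csv").getLines().foreach(listOfTenants += _)

val httpConf1 = http.baseUrls(listOfTenants.toList) // convert the buffer to a list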
Hope it helps! Thank you. 😊
Assume I have set up an arbitrarily complex Flow[HttpRequest, HttpResponse, Unit].
I can already use said flow to handle incoming requests with
Http().bindAndHandle(flow, "0.0.0.0", 8080)
Now I would like to add logging, leveraging some existing directive, like logRequestResult("my-service"){...}
Is there a way to combine this directive with my flow? I guess I am looking for another directive, something along the lines of
def completeWithFlow(flow: Flow): Route
Is this possible at all?
N.B.: logRequestResult is just an example; my question applies to any Directive one might find useful.
Turns out one way (and maybe the only way) is to wire and materialize a new flow, and feed the extracted request to it. Example below:
val myFlow: Flow[HttpRequest, HttpResponse, NotUsed] = ???
val route =
get {
logRequestResult("my-service") {
extract(_.request) { req ⇒
val futureResponse = Source.single(req).via(myFlow).runWith(Sink.head)
complete(futureResponse)
}
}
}
Http().bindAndHandle(route, "127.0.0.1", 9000)
http://doc.akka.io/docs/akka/2.4.2/scala/http/routing-dsl/overview.html
Are you looking for route2HandlerFlow or Route.handlerFlow?
I believe Route.handlerFlow will work based on implicits.
e.g.
val serverBinding = Http().bindAndHandle(interface = "0.0.0.0", port = 8080,
  handler = route2HandlerFlow(mainFlow()))
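For reference, the conversion can also be spelled out explicitly with Route.handlerFlow. A minimal sketch, assuming akka-http 2.4.x-era APIs with an implicit ActorSystem and ActorMaterializer in scope (the route body and port here are just placeholders):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("example")
implicit val materializer = ActorMaterializer()

val route = get { complete("ok") } // any ordinary Route

// Route.handlerFlow turns the routing-DSL Route into a Flow[HttpRequest, HttpResponse, _]
val handlerFlow = Route.handlerFlow(route)
Http().bindAndHandle(handler = handlerFlow, interface = "0.0.0.0", port = 8080)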
I am making a variation of the Client Server Example that just returns JSON strings using the Redstone framework. It has 3 routes:
/ => gets the whole list of names
/add/name => adds "name" to the list and returns the list
/remove/name => removes "name" from the list and returns the list
When I test locally everything works fine; however, when I deploy to App Engine I get an error when trying to add an element to the gcloud db. The error is:
Exception: Tried to insert 1 entities, but response seems to indicate we inserted 0 entities.
package:appengine/src/api_impl/raw_datastore_v3_impl.dart 416:11  DatastoreV3RpcImpl.commit
dart:isolate  _RawReceivePortImpl._handleMessage
You can test the error live at this URL http://web3.arista-dev.appspot.com/add/my-name
Remove doesn't seem to work either but yields no error. Here is my code:
import 'dart:io';
import 'dart:async';
import 'package:shelf/shelf.dart' as shelf;
import 'package:redstone/server.dart' as app;
import 'package:restonetest/model.dart';
import 'package:gcloud/db.dart';
import 'package:appengine/appengine.dart';
Key get itemsRoot => context.services.db.emptyKey.append (ItemRoot, id: 1);
DatastoreDB db = context.services.db;
Future<List<Item>> queryItems ()
{
var query = context.services.db.query (Item, ancestorKey: itemsRoot)
..order ('name');
return query.run ().toList ();
}
Future<List<Item>> addItemToDB (Item item)
{
return db.query(Item, ancestorKey: itemsRoot).run()
.any((i) => i.name == item.name)
.then((exists)
{
return ! exists ? db.commit(inserts: [item]) : false;
});
}
#app.Route("/")
helloWorld() => queryItems();
#app.Route('/add/:name')
addItem (String name)
{
return addItemToDB (new Item.create (name, itemsRoot)).then ((_)
{
print (name);
return helloWorld();
});
}
#app.Route('/delete/:name')
deleteItem (String name)
{
var query = db.query (Item, ancestorKey: itemsRoot)..filter('name =', name);
return query.run().toList().then((list)
{
var toDelete = list.map((i) => i.key).toList();
return db.commit(deletes: toDelete);
})
.then((_) => helloWorld());
}
main() {
app.setupConsoleLog();
app.setUp();
runAppEngine(app.handleRequest);
//app.start();
}
At the moment, package:appengine only allows one to call API services inside a request handler:
Each request handler invocation will get a new set of services. This allows package:appengine to give each request handler e.g. a different logging service instance. This allows all logging API calls to be grouped by request.
The way this is achieved in dart / package:appengine is by using Zones. For every incoming request, package:appengine makes a new Zone with API services and calls the request handler inside that. The handler can then use 'context.services.' to make API calls.
So the issue in the program posted above is that the DatastoreDB service gets cached from the first request (global fields are lazily initialized) and may no longer work for subsequent requests.
Changing
DatastoreDB db = context.services.db;
to
DatastoreDB get db => context.services.db;
should fix the problem, since the services object will be re-fetched every time from the request handler Zone.
This being said:
a) The real error is swallowed and the reported one is misleading; this will be fixed in package:appengine
b) In the near future, we will allow background tasks / tasks making API calls outside of a request handler. This is missing at the moment, but will be implemented.
I hope this helps.
I am developing a small REST app in the Slim framework. In it, the user's password is sent encrypted in the request body as XML or JSON. I want to decrypt that password in a callable function and update the request body, so that in the actual callback function we can validate the password without decrypting it. I want to do those steps as follows:
$decrypt = function (\Slim\Route $route) use ($app) {
// Decrypt password and update the request body
};
$update = function() use ($app) {
$body = $app->request()->getBody();
$arr = convert($body);
$consumer = new Consumer($arr);
if ($consumer->validate()) {
$consumer->save();
$app->response()->status(201);
} else {
.....
}
}
$app->put('/:consumer_id', $decrypt, $update);
We can modify the body in the following way:
$env = $app->environment;
$env['slim.input_original'] = $env['slim.input'];
$env['slim.input'] = 'your modified content here';
Courtesy: ContentTypes middleware
You say you want to decrypt the password and update the request body. If you're encrypting the password on the client side, I would rather decrypt it in a server-side layer like an API service (or something that consumes the business layers, like a controller in MVC).
I do believe that this decryption process should belong to your application instead of being done outside, before your code is consumed. I don't know how you encrypt, but if you use server-side programming to generate a new hash in those requests, for me that's an even better reason to do it inside the library.
That's how I handle this type of task: I try to use the framework only for consuming libraries, not for handling any logic.
However, if you want to do this, you could transform the request body and save it in a new location for the services that need the decrypted password.
I use middleware for almost all code I need to write specifically for the Slim layers. I only pass in functions that consume classes acting as API layers, abstracted from Slim. For your case, use a middleware to keep this logic in its own place.
class DecriptPasswordRequest extends \Slim\Middleware
{
    public function call()
    {
        $decriptedRoutes = array('login', 'credentials');
        $app = $this->app;
        $container = $app->container;
        $currentRoute = $app->router()->getCurrentRoute();
        if ($app->request->getMethod() == 'POST' && in_array($currentRoute->getName(), $decriptedRoutes)) {
            $body = $app->request->post();
            if (!isset($body['password'])) {
                throw new Exception('Password missing');
            }
            $provider = new ClassThatDecryptPassword();
            $body['password'] = $provider->decrypt($body['password']);
            $container['bodydecripted'] = $body;
        }
        $this->next->call();
    }
}