Securing WebGL + React app -> database connection - reactjs

So, I thought that if my WebGL game runs on a particular IP / domain, I can ensure that my API only accepts data from that particular domain / IP. But that can be defeated by simply adding a vhost and mimicking the domain so the data appears to come from it.
An alternative is to add JWT tokens to the Unity (WebGL) build, but then the secret key for the JWT would be easy to extract from the client build. I'm unable to figure out a secure architecture for saving the game's data, e.g., to keep a leaderboard or create a reward system.
My flow is:
1.) User logs in to the WebGL app via Discord
2.) Plays the game in WebGL (Unity) on a particular domain
3.) React app collects the data transmitted from Unity (WebGL)
4.) Save that data to the database
The insecurity is at 2 (where anyone can modify the game or run a clone -- I could address it by ensuring that the build is served from my server via IP and domain, but I know that can be mimicked).
So there has to be some sort of handshake between steps 2 and 3 to validate that the source of the data is authentic.
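One commonly suggested direction, sketched here under assumptions that are not in the original post: keep every secret on the API side. After the Discord login in step 1, the API issues a short-lived signed session token; the React app then attaches that token to every score submission it forwards from Unity in step 3, and the API rejects anything it cannot verify. The names below (SessionTokens, SESSION_SIGNING_KEY) are illustrative. Note the limitation: this only authenticates the player, not the gameplay, so a modified client can still submit fake scores for its own account unless the server also validates or recomputes them.

// Hedged sketch: the signing key lives only on the server, never in the WebGL build.
using System;
using System.Security.Cryptography;
using System.Text;

public static class SessionTokens
{
    // Assumed to be provided via server configuration (e.g. an environment variable).
    private static readonly byte[] Key =
        Encoding.UTF8.GetBytes(Environment.GetEnvironmentVariable("SESSION_SIGNING_KEY") ?? "dev-only-key");

    // Issued right after the Discord OAuth callback succeeds.
    public static string Issue(string discordUserId, TimeSpan lifetime)
    {
        var expires = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
        var payload = $"{discordUserId}|{expires}";          // assumes the user id contains no '|'
        return $"{payload}|{Sign(payload)}";
    }

    // Called by the score/leaderboard endpoint before accepting any data.
    public static bool TryValidate(string token, out string discordUserId)
    {
        discordUserId = null;
        var parts = (token ?? "").Split('|');
        if (parts.Length != 3) return false;

        var payload = $"{parts[0]}|{parts[1]}";
        var expectedSignature = Sign(payload);
        if (!CryptographicOperations.FixedTimeEquals(
                Encoding.UTF8.GetBytes(parts[2]),
                Encoding.UTF8.GetBytes(expectedSignature)))
            return false;                                     // signature mismatch

        if (!long.TryParse(parts[1], out var expiresUnix) ||
            DateTimeOffset.FromUnixTimeSeconds(expiresUnix) < DateTimeOffset.UtcNow)
            return false;                                     // malformed or expired

        discordUserId = parts[0];
        return true;
    }

    private static string Sign(string payload)
    {
        using var hmac = new HMACSHA256(Key);
        return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }
}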

Related

How to create an online-offline application using servicestack

I'm trying to figure out how to create an offline / online approach to use within a huge application.
Right now, each part of the application has its own model and data layer, which read / write data directly from / to SQL. My boss is asking me to create a kind of buffer that, in case of a connectivity failure, can be used to store data until the connection to SQL becomes active again.
What I'm trying to create is something like this: move all data layers into a ServiceStack service. Each "GET" method should query the database and store the result in a cache to be reused when the connection to SQL is not available. Each "POST" and "PUT" method must execute its action, or store the request in a cache if the connection fails. This cache must be cleared once the connection to SQL is restored.
How can I achieve this? Mine is a WPF application running on Windows 10.
Best regards
Enrico
Maintaining caches on the server is not going to help create an offline application, given the client wouldn't have access to the server in order to retrieve those caches. What you need instead is to maintain state on the client, so that in the event network access is lost the client loads from its own local caches.
Architecturally this is easiest achieved with a Web App using a Single Page App framework like Vue (+ Vuex) or React (+ Redux or MobX). The ServiceStack TechStacks and Gistlyn Apps are good (well documented) examples of this, where they store client state in a Vuex store (for TechStacks, created in Vue) or a Redux store (for Gistlyn, created in React), or the old TechStacks (created with AngularJS).
For good examples of this, check out Gistlyn's snapshots feature, where the entire client state can be restored from a single serialized JSON object, or the approach used in the Real Time Network Traveler example, where an initial client state and deltas can be serialized across the network to enable real-time remote control of multiple connected clients.
They weren't developed with offline in mind, but their architecture naturally lends itself to being offline capable: each page is first loaded from its local store, then it fires off a request to update its local cache, and thanks to the reactivity of JS SPA frameworks the page is automatically updated with the latest version from the server.
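Since the question's own client is WPF rather than a SPA, here is the same load-from-local-cache-first idea sketched in C#. This is a rough sketch under assumptions not in the answer above: CachedGateway is a made-up name, the cache is one JSON file per resource, and System.Text.Json is available.

using System;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

public class CachedGateway<T>
{
    private readonly string _cacheFile;
    private readonly Func<Task<T>> _fetchFromServer;   // e.g. wraps a ServiceStack service client call

    public CachedGateway(string cacheFile, Func<Task<T>> fetchFromServer)
    {
        _cacheFile = cacheFile;
        _fetchFromServer = fetchFromServer;
    }

    // Step 1: show whatever was cached on disk last time (default on first run).
    public T LoadFromCache() =>
        File.Exists(_cacheFile)
            ? JsonSerializer.Deserialize<T>(File.ReadAllText(_cacheFile))
            : default;

    // Step 2: try to refresh from the server; if it is unreachable, keep the cached copy.
    public async Task<T> RefreshAsync()
    {
        try
        {
            var fresh = await _fetchFromServer();
            File.WriteAllText(_cacheFile, JsonSerializer.Serialize(fresh));
            return fresh;
        }
        catch (Exception)
        {
            return LoadFromCache();   // offline: fall back to the last known state
        }
    }
}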
Messaging APIs
HTTP has synchronous tight coupling, which isn't ideal for offline communication. What you want instead is to design your write APIs so they're One Way / asynchronous, so you can implement a message queue on the client which queues up Request DTOs and sends them reliably to the server by resending them (using an exponential backoff) until they succeed without error. Then, for cases where the client needs to be notified that its request has been processed, that can be done either via Server Events or by the client long-polling the server to check whether its request has been processed.
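A minimal sketch of that client-side outbox, with names (OfflineOutbox, ISendToServer) that are illustrative rather than ServiceStack APIs: write Request DTOs are queued locally and retried with an exponential backoff until the server accepts them.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface ISendToServer
{
    Task SendAsync(object requestDto);   // e.g. wraps a ServiceStack JSON service client
}

public class OfflineOutbox
{
    private readonly ConcurrentQueue<object> _pending = new ConcurrentQueue<object>();
    private readonly ISendToServer _client;

    public OfflineOutbox(ISendToServer client) => _client = client;

    public void Enqueue(object requestDto) => _pending.Enqueue(requestDto);

    // Drain the queue, backing off exponentially while the connection is down.
    public async Task FlushAsync()
    {
        var delay = TimeSpan.FromSeconds(1);
        while (_pending.TryPeek(out var dto))
        {
            try
            {
                await _client.SendAsync(dto);
                _pending.TryDequeue(out _);      // only remove after a successful send
                delay = TimeSpan.FromSeconds(1); // reset the backoff
            }
            catch (Exception)
            {
                await Task.Delay(delay);         // still offline: wait and retry
                delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2,
                            TimeSpan.FromMinutes(5).Ticks));
            }
        }
    }
}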

Adding a ResourceOwnerPassword client

I need to create a new client for a Windows app. This app will create (via my own API) a Client in ID4.
I've used the ConfigurationDbContext in my API to add clients.
When I try to authenticate the client using ResourceOwnerPassword I get an error:
IdentityServer4.AspNetIdentity.ResourceOwnerPasswordValidator | No user found matching username: ...
The documentation for ID4 says to use the Config class and GetClients() etc., but it only shows how this works for in-memory stores. I'm using EF Core. Besides, these clients are added dynamically, not statically at startup.
Is there no higher-level service than the db context? Do I have to figure out the db structure and add the data myself? Seems very.... non-intuitive :/
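As far as I know there is no higher-level client-management service built into IdentityServer4 itself, but you don't have to hand-map the EF tables either: the IdentityServer4.EntityFramework.Mappers namespace has a ToEntity() extension that converts the IdentityServer4.Models.Client you already know into the EF entity. A rough sketch, with the scope and grant type as placeholders:

using IdentityServer4.EntityFramework.DbContexts;
using IdentityServer4.EntityFramework.Mappers;
using IdentityServer4.Models;

public class ClientRegistrationService
{
    private readonly ConfigurationDbContext _db;

    public ClientRegistrationService(ConfigurationDbContext db) => _db = db;

    public void AddResourceOwnerClient(string clientId, string secret)
    {
        var client = new Client
        {
            ClientId = clientId,
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            ClientSecrets = { new Secret(secret.Sha256()) },
            AllowedScopes = { "api1" }   // placeholder: use your own API scope
        };

        _db.Clients.Add(client.ToEntity());   // maps the model onto the EF entity
        _db.SaveChanges();
    }
}

Also note that the "No user found matching username" message in the question comes from the user lookup, not the client configuration: the ResourceOwnerPasswordValidator in IdentityServer4.AspNetIdentity checks the username and password against the ASP.NET Identity user store, so that user has to exist there as well.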

Best way to define Client to be used with localhost vs domain

So I got my Identity Server project up and running, and am setting up my project to publish. Now, when I define my client in the config for IS4, I suppose I will have to set my redirect URLs to my publish domain, something like this:
new Client {
    ...
    RedirectUris = { "http://localhost:5002/signin-oidc", "https://myclient.com/signin-oidc" }
    ...
}
Is including both the localhost and the domain the right way to do this?
I am thinking it would be OK, since an attacker would have to have my client secret in order to log in. Or is it better to set up two separate clients (e.g. 'client' and 'client_local') and request the appropriate client at startup?
There are two ways:
1) Use Configuration File: You can store the clients in a JSON file and load them during startup. Use different JSON files for different environments.
For example, clients.Development.json for the Development environment and clients.Production.json for Production. However, the clients will be in-memory clients, and any change to the client configuration will require a restart of your application.
2) Use Persistent Storage: Use a database server to store configuration and operational data. A local database for development and a database for production use.
See the docs. The example uses Entity Framework for persistent storage, but you're not bound to Entity Framework or any ORM; you can opt to write your own data access layer for IdentityServer. This approach lets you change client configurations without restarting your application, as the data is retrieved from a database.
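A rough sketch of what both options can look like in Startup.ConfigureServices; the section name "IdentityServer:Clients", the connection string name and the commented-out Entity Framework wiring are assumptions to adapt to your own setup.

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        var builder = services.AddIdentityServer();

        // Option 1: clients defined in environment-specific config files
        // (appsettings.Development.json / appsettings.Production.json),
        // loaded as in-memory clients; changing them requires a restart.
        builder.AddInMemoryClients(Configuration.GetSection("IdentityServer:Clients"));

        // Option 2: clients stored in a database via Entity Framework,
        // so changes take effect without restarting the application.
        // builder.AddConfigurationStore(options =>
        //     options.ConfigureDbContext = db =>
        //         db.UseSqlServer(Configuration.GetConnectionString("IdentityServer")));
    }
}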

Improving mobile apps' client-server communication efficiency and data availability in offline mode

My question is about how to store data that was received while online so it can still be processed after the mobile device goes offline and/or is restarted.
I'm using AngularJS with Ionic (PhoneGap) for building apps, but my question is not specific to these technologies.
Best practices, patterns or algorithms would be very helpful to me, or even some useful articles or keywords.
1) The simplest challenge is to make my app more user-friendly by making its functionality usable not only while the device is online but also in offline mode. In my case this implies that I have to make the last-fetched online data available for later use (while the device is offline and also after restarting the device!).
2) A bit more difficult is to reduce communication costs by synchronizing only the data changed on the server side when the device reconnects to the internet.
3) Entities can also be produced on the client side while the device is offline, and they must get synchronized to the server too. There is no risk of conflicts because the users don't share entities with write access.
4) I use Google's and Apple's push services to inform the devices about newer entity versions, which should get updated on the client side, so polling isn't needed.
Client side technologies: Javascript, AngularJS Framework, Ionic Framework, SQLite (WebSQL) or IndexedDB, PhoneGap (Cordova)
Server side technologies: Java EE, JPA, MySQL
Data format and communication: JSON over REST / HTTP, Google's and Apple's push services for server-to-client messaging
1) Store the needed data inside a local SQLite database, and pull it out when the app starts/resumes.
2) In the MySQL database you need a table that gets a new entry whenever you create/update/delete content. You would need to store an id and a timestamp (and maybe a boolean flag for whether the content was deleted).
On the device you would make a request to the server to send the data from that table and compare it with the locally stored data. If there is a new id, or a timestamp has changed, make a new request to pull the updated data.
3) Store the created data locally with a flag indicating that it isn't synced with the server. When the device goes online again, check for unsynced flags and send the data to the server with an identifier, so the server knows which device it comes from and where to save it.
4) See 2).
You could also write a scheduled Java job on the server which checks every x minutes for updated entries and sends an automatic push notification for them. For that you would need two tables: one with the newest updates and one with the updates already pulled by the device (just ids and timestamps, not all the data).
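An illustrative sketch of the change-log idea from 2) and 4). The question's server stack is Java EE/JPA with MySQL, but the shape is the same in any language; ChangeLogEntry and SyncService are made-up names, and the in-memory list simply stands in for the database table:

using System;
using System.Collections.Generic;
using System.Linq;

public class ChangeLogEntry
{
    public long EntityId { get; set; }
    public DateTime ChangedAtUtc { get; set; }
    public bool IsDeleted { get; set; }
}

public class SyncService
{
    private readonly List<ChangeLogEntry> _changeLog;   // stands in for the change-log table

    public SyncService(List<ChangeLogEntry> changeLog) => _changeLog = changeLog;

    // The client sends the timestamp of its last successful sync; the server returns
    // only the ids that changed since then, so the client can fetch (or locally
    // delete) just those entities instead of re-downloading everything.
    public IEnumerable<ChangeLogEntry> GetChangesSince(DateTime lastSyncUtc) =>
        _changeLog.Where(e => e.ChangedAtUtc > lastSyncUtc)
                  .OrderBy(e => e.ChangedAtUtc);
}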
I hope this was helpful, if something new comes to my mind I will update this answer.

I am using the HTTP Form Adapter in PingFederate. How do I get user attributes from the SAML response?

The HTTP Form Adapter serves as the authentication service in my application. I have not implemented any application on the Identity Provider side to collect user inputs.
Therefore, on successful authentication, the SP verifies the user's signature and redirects to the application. At my target resource, I receive an OpenToken. Is it still possible to use the OpenToken JAR to read the user attributes from the OTK?
Note: In the Service Provider, I use the OpenToken Adapter.
Also, please let me know if there is any other possible way of getting the user attributes other than using the OpenToken Adapter / HTTP Form Adapter.
Thanks.
There are numerous SP Adapters you can choose from for your last-mile integration with your application; the OpenToken Adapter is just one of them. If your application is in Java and you are using the SP OpenToken Adapter, then you would most likely use the Java OpenToken Agent implementation within your application to read the OTK (documented in the Java Integration Kit). If you look at the Add-Ons list, there are actually three flavors of OTK Agents (.NET, Java and PHP from PingID; Ruby on Rails and Perl are available via their respective open-source repositories).
However, you are not limited to OpenToken Adapters. The Agentless Integration Kit is also very popular for SP/last-mile integration with PingFederate.
Unfortunately, the question is just too open-ended for the Stack Overflow format. I would suggest talking to your Ping Identity Solution Architect, who can help steer you in the right direction and ask the necessary follow-up questions about your use case.
If I understand the question correctly, you want attributes to be fulfilled that the web application can read and use. This starts with the SP Connection configuration. I am going to assume you are using Active Directory and have already configured that data source, along with the Password Credential Validator (PCV) for the HTML Form IdP Adapter. In the SP Connection you will need to extend the attribute contract to define the values to put into the SAML assertion, and then use the Active Directory data source to fulfill the attributes. When the SAML assertion is received by the PingFederate SP-role server, the SP Adapter maps the attribute values from the SAML assertion into the OpenToken. When your application receives the OpenToken, it can read those values.
