Can't POST credentials on OCN client - shareandcharge

I have a problem with the OCN client.
When I call POST http://host/ocpi/2.2/credentials, the response is:
Role with party_id=ABC and country_code=ES not listed in (truncated)...
Does this mean I have to register our company?
How can I do that?

Yes, the client will not accept the credentials handshake if you are not already listed in the registry. Running the registry locally will create a new network that exists only in your local development environment. To do this, you would also need to point the OCN client to your local network in its configuration. This is good during development, though there will be no other MSPs/CPOs to interact with out of the box (see https://bitbucket.org/shareandcharge/ocn-tools for how you could set those up yourself). Once you are ready, you could then access our public test environment via your local OCN client (the default configuration values point to it), or connect to one remotely.
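For reference, once your party is registered, the handshake itself is a plain JSON POST. Here is a rough TypeScript sketch of an OCPI 2.2 credentials request; the tokens, URLs and party details are placeholders, and the party_id/country_code pair is what must already be listed in the registry:

// Rough sketch of the OCPI 2.2 credentials handshake (TypeScript, Node 18+).
// TOKEN_A / TOKEN_B, URLs and party details are placeholders, not real values.
const credentials = {
  token: "TOKEN_B_PLACEHOLDER", // token the receiver should use to call us back
  url: "https://our-platform.example/ocpi/versions", // our versions endpoint
  roles: [
    {
      role: "CPO",
      party_id: "ABC", // this pair is what must be listed in the OCN registry
      country_code: "ES",
      business_details: { name: "Example Company" },
    },
  ],
};

const res = await fetch("http://host/ocpi/2.2/credentials", {
  method: "POST",
  headers: {
    Authorization: "Token TOKEN_A_PLACEHOLDER", // token obtained out of band
    "Content-Type": "application/json",
  },
  body: JSON.stringify(credentials),
});
console.log(res.status, await res.json());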

Related

Limit the usage of Kerberos TGTs

I'm pretty new to Kerberos. I'm testing the Single Sign-On feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The called CGI script (in Perl) connects to a DB server using the forwarded user's TGT. Everything works fine (I have the principals, the keytab files, the config files, and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running the Apache server gets the forwarded TGTs of all authenticated users (as it can see all the credential caches), and while those TGTs have not expired it can request any service principal on behalf of those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory so that a forwarded TGT is valid only for requesting a given service principal? And/or is there a way to make a forwarded TGT single-use, i.e. to have it become invalid once any service principal has been requested with it? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe by checking a usage counter?).
So far I have only one solution: I can set the lifetime of the forwarded TGT to 2 seconds and run kdestroy in the CGI script after the DB connection is established (I have set things up so that the apache user can execute the CGI script but cannot modify its code). Can I do a bit more?
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but this is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: could be a better solution, as these are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each connection, so this may work. Maybe KEYRING:thread:name is the solution? But, according to the thread-keyring(7) man page, it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is called via execve, it will not get the credential cache. Maybe mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit who can use a credential if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and you can do with it what you like. This is a fundamentally flawed design as far as security concerns go.
Windows refers to this as unconstrained delegation, and specifically has a solution for it through a Kerberos extension called [MS-SFU], more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket (issued to itself) for a service ticket to a delegated server (the DB) from Active Directory. The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite it being sent by Apache.
The trick, of course, is that enlightenment bit. Without knowing more about the specifics of how the authentication happens in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
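To illustrate that quoted advice, here is a minimal sketch (in Node/TypeScript rather than the Perl/Apache setup above, purely to show the idea) of a supervisor giving each spawned process its own cache; the worker path and arguments are made up:

import { spawn } from "node:child_process";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Give each worker its own credentials cache in a private directory, so
// independent processes never share (or clobber) a common FILE: cache.
function spawnWorker(cmd: string, args: string[]) {
  const dir = mkdtempSync(join(tmpdir(), "krb5-")); // created with mode 0700
  return spawn(cmd, args, {
    env: { ...process.env, KRB5CCNAME: `FILE:${join(dir, "ccache")}` },
    stdio: "inherit",
  });
}

// Hypothetical worker; in the question's setup this would be the CGI script.
spawnWorker("/usr/local/bin/db-client.pl", ["--query", "select user from dual"]);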
If your focus is on securing the credentials, then go one step further and don't use a cache at all. Modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html

How to make MongoDB (local database) also accessible via the internet at the same time

My planned setup is to make the local database accessible to the local computers, because the heavy data manipulation needs fast responses, but at the same time I want to access it via the internet when I'm away from the local network. Is this possible?
Currently using the MERN stack.
I tried mLab but the data responses are pretty slow.
Thank you in advance.
It looks like you're going down a dangerous path. Out of the box, MongoDB has no authentication enabled and binds only to localhost on the standard port 27017. To make it available online you need to:
run without authentication (it is not enabled by default)
change or remove the bindIp option so the server listens beyond localhost
ensure the port is not blocked by a firewall
All of this can be done in the config file.
https://docs.mongodb.com/manual/reference/configuration-options/
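For illustration, the relevant keys in the YAML config file (mongod.conf) would look roughly like this; the values shown are examples, not a recommendation to expose the server:

# mongod.conf -- illustrative values only
net:
  port: 27017
  bindIp: 0.0.0.0         # listens on all interfaces; the default is 127.0.0.1
security:
  authorization: enabled  # strongly recommended before exposing the port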
However, what you probably really want to do is create routes within Express so that users can communicate with your Mongo in a structured and safe manner. More information here: http://mean.io/2017/10/31/getting-started-mean-io/
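A rough sketch of that idea, assuming Express and the official mongodb driver (the database, collection and field names are made up):

import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

// Mongo itself stays bound to localhost; only these routes face the internet.
const client = new MongoClient("mongodb://127.0.0.1:27017");
await client.connect();
const items = client.db("app").collection("items");

// Expose a narrow, validated API instead of the raw database port.
app.get("/api/items", async (_req, res) => {
  res.json(await items.find({}, { limit: 100 }).toArray());
});

app.post("/api/items", async (req, res) => {
  const { name } = req.body;
  if (typeof name !== "string" || name.length > 200) {
    return res.status(400).json({ error: "invalid name" });
  }
  const result = await items.insertOne({ name, createdAt: new Date() });
  res.status(201).json({ id: result.insertedId });
});

app.listen(3000, () => console.log("API listening on :3000"));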

Best way to define Client to be used with localhost vs domain

So I got my IdentityServer project up and running, and am setting up my project to publish. Now, when I define my client in the config for IS4, I suppose I will have to set my redirect URLs to my publish domain, something like this:
new Client {
    ...
    RedirectUris = { "http://localhost:5002/signin-oidc", "https://myclient.com/signin-oidc" }
    ...
}
Is including both the localhost and the domain redirect URIs the right way to do this?
I am thinking it would be OK, since an attacker would have to have my client secret in order to log in. Or is it better to set up two separate clients (e.g. 'client' and 'client_local') and request the appropriate client at startup?
There are two ways:
1) Use a configuration file: You can store the clients in a JSON file and load them during startup. Use different JSON files for different environments.
For example, clients.Development.json for the Development environment and clients.Production.json for Production. Note, however, that the clients will be in-memory clients, and any change to the client configuration will require a restart of your application.
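For example, a clients.Development.json might look roughly like this; the exact shape depends on how you bind it to IdentityServer's Client model, and all values here are illustrative:

{
  "Clients": [
    {
      "ClientId": "client_local",
      "AllowedGrantTypes": [ "authorization_code" ],
      "RedirectUris": [ "http://localhost:5002/signin-oidc" ],
      "AllowedScopes": [ "openid", "profile" ]
    }
  ]
}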
2) Use persistent storage: Use a database server to store configuration and operational data: a local database for development and a separate database for production use.
See the docs; the example there uses Entity Framework for persistent storage, but you are not bound to Entity Framework or any particular ORM. You can opt to write your own data access layer for IdentityServer. This approach lets you change client configurations without restarting your application, as the data is retrieved from a database.

expose local webserver behind dynamic IP

I have a simple webserver bound to 0.0.0.0:3000 on my machine, which works as intended on the local network. By that I mean: if my mobile or any other device is on the same network, it can access the webserver by going to the IP assigned to my machine plus port 3000, e.g. 192.168.1.4:3000.
Now I have to expose it to the internet, but not through some sort of 3rd-party application like ngrok, localtunnel or browserSync. I know that these applications work perfectly, but since this is my own pet project for controlling home appliances, I don't want to rely on the availability of 3rd-party services. So the current state is that I cannot control it through the internet. Keep in mind that I don't have a static IP, otherwise this would've been easier.
I already have a VPS and a domain name assigned to it. I can send my currently allocated IP address (since it is dynamic) to my server by using getifaddrs and keep track of it there. But how do I expose my local server to the internet through it? Those 3rd-party applications assign some sort of subdomain to each exposed server, and I'll be able to assign subdomains too, but I'm still not seeing any way to expose the local webserver. Any help would be appreciated, thank you :)
Step one: you need to expose your webserver at your internet access router.
Typically this requires you to configure port forwarding for (in your case) port 3000.
With this done, any client can access your service via (current external dynamic IP):3000.
Step two: you need to dynamically map a fixed DNS name to your current dynamic IP. There are of course third-party services (such as DynDNS) that would help you map yourfavoritename.dyndns.org to that ever-changing IP address.
If you want to do the latter without 3rd party, you need to have some static (web) server somewhere and could proceed as follows:
Clients visit http://www.yourstaticserver.example/ and that server redirects them to (current dynamic ip):3000.
Of course, for this to happen, your static server needs to know the dynamic IP and needs to find out about changes to it.
To this end, you could have your internal server contact the static server at a regular interval (such as once a minute), say by accessing http://www.yourstaticserver.example/some-secret-special-page, and have the static server always store the REMOTE_ADDR of such a request (preferably with some authorization!) for its future redirections.
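A minimal sketch of that static server in Node/TypeScript with Express; the secret path and token are placeholders you would pick yourself:

import express from "express";

const app = express();
let currentIp: string | null = null;

// The home server hits this once a minute; we remember its source address.
// The path and token are placeholders standing in for real authorization.
app.get("/some-secret-special-page", (req, res) => {
  if (req.query.token !== "CHANGE_ME") return res.sendStatus(403);
  currentIp = req.ip ?? null; // the REMOTE_ADDR of the update request
  res.sendStatus(204);
});

// Everyone else is redirected to the current dynamic address.
app.get("/", (_req, res) => {
  if (!currentIp) return res.status(503).send("home server not registered yet");
  res.redirect(`http://${currentIp}:3000/`);
});

app.listen(80); // port 80 usually needs privileges or a reverse proxy in front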
Actually, there is a step zero before step one: Be aware that exposing your server to the Internet means that you expose your server to the Internet. So I hope you have invested enough thought into security.

Is there any open source project for an IP tunnel?

I need one server to receive IP requests from clients (they are not on the same intranet), route all the response packets to a special gateway server, and then send the response packets to the clients after some processing. It is like a VPN, but I want to do the development based on an open source project so I can control it myself.
Any suggestions? Thanks!
There is OpenVPN, which, as the name already suggests, is open source.
You could set up the server on the local machine as a kind of proxy (or reverse proxy, depending on your viewpoint) and have the clients connect to it.
It depends on what protocol you're using: maybe it has explicit proxy capability, or you can get an existing proxy program, or just proxy it using a simple socket forwarder program.
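For that last option, a simple TCP socket forwarder takes only a few lines with Node's net module; the host and ports are placeholders for your own setup:

import net from "node:net";

// Forward every connection on local port 8080 to a gateway server.
const GATEWAY_HOST = "gateway.example.com";
const GATEWAY_PORT = 3000;

net
  .createServer((client) => {
    const upstream = net.connect(GATEWAY_PORT, GATEWAY_HOST);
    client.pipe(upstream).pipe(client); // shuttle bytes in both directions
    const close = () => {
      client.destroy();
      upstream.destroy();
    };
    client.on("error", close);
    upstream.on("error", close);
  })
  .listen(8080, () => console.log("forwarding :8080 -> gateway:3000"));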
