How do we obtain the Google App Engine safe-URL key in Java for a different appId and namespace? - google-app-engine

When I just need to do it for my own application without a namespace, I can use the following code:
final Key myKey = KeyFactory.createKey(kind, id);
final String safeUrlKey = KeyFactory.keyToString(myKey);
Unfortunately, when I need to do it for a different appId or namespace, I can't find any way to do it in Java.
In python for example I can use the following code:
new_key = db.Key.from_path(entity, id, _app=application_id, namespace=namespace)
return str(new_key)
But in Java this doesn't seem to be available.
Any idea on how I can do this?

The App Engine SDK does indeed try to prohibit this, as evidenced by the lack of public classes/methods that can handle app IDs and namespaces. Even in Python this is discouraged by the underscore prefix on the _app keyword argument. This is probably because App Engine apps are meant to be well contained within their project.
It is possible to use reflection to work around these barriers, but only on the Standard Java 8 runtime (which is currently in beta). The Standard Java 7 runtime prohibits reflecting non-accessible methods. (If you're using App Engine Flex I suspect you'll be OK too, although I haven't tested that.)
If you are already using Java 8 or willing to switch, I was able to create keys for arbitrary app IDs/namespaces with the following:
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

Key createKey(String appId, String namespace, String kind, long id) {
    try {
        // AppIdNamespace is package-private, so it has to be loaded by name.
        Class<?> appNsClazz = Class.forName("com.google.appengine.api.datastore.AppIdNamespace");
        Constructor<?> constructor = appNsClazz.getConstructor(String.class, String.class);
        constructor.setAccessible(true);
        // Key(String kind, Key parent, long id, String name, AppIdNamespace appNs)
        Constructor<Key> keyFactory = Key.class.getDeclaredConstructor(String.class,
                Key.class, long.class, String.class, appNsClazz);
        keyFactory.setAccessible(true);
        Object appNs = constructor.newInstance(appId, namespace);
        return keyFactory.newInstance(kind, /* parent key */ null, id, /* name */ null, appNs);
    } catch (ClassNotFoundException | NoSuchMethodException |
             InvocationTargetException | InstantiationException |
             IllegalAccessException e) {
        throw new RuntimeException(e);
    }
}
If you will be running this code often, it would be good to cache the Constructor instances (and the appNs instance if possible) to avoid the performance overhead of reflection.
Please do note that this code will not work on the Standard Java 7 runtime.
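For illustration, a minimal sketch of what that caching could look like, assuming the same reflective lookups as above (the field and method names here are my own, not part of the SDK):

import java.lang.reflect.Constructor;

// Cache the reflective handles once; only the newInstance calls run per key.
private static final Constructor<?> APP_NS_CTOR;
private static final Constructor<Key> KEY_CTOR;
static {
    try {
        Class<?> appNsClazz = Class.forName("com.google.appengine.api.datastore.AppIdNamespace");
        APP_NS_CTOR = appNsClazz.getConstructor(String.class, String.class);
        APP_NS_CTOR.setAccessible(true);
        KEY_CTOR = Key.class.getDeclaredConstructor(String.class, Key.class, long.class,
                String.class, appNsClazz);
        KEY_CTOR.setAccessible(true);
    } catch (ClassNotFoundException | NoSuchMethodException e) {
        throw new ExceptionInInitializerError(e);
    }
}

Key createKeyCached(String appId, String namespace, String kind, long id) {
    try {
        Object appNs = APP_NS_CTOR.newInstance(appId, namespace);
        return KEY_CTOR.newInstance(kind, /* parent key */ null, id, /* name */ null, appNs);
    } catch (ReflectiveOperationException e) {
        throw new RuntimeException(e);
    }
}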

Finally, I was able to make it work with the following code (looking at how KeyFactory does it internally):
public static String getSafeUrlFromId(final String kind, final Long id, final String applicationId, final String namespace) {
    // Reference, Path and Element come from com.google.storage.onestore.v3.OnestoreEntity;
    // BaseEncoding is Guava's com.google.common.io.BaseEncoding.
    final com.google.storage.onestore.v3.OnestoreEntity.Reference myMessage = new com.google.storage.onestore.v3.OnestoreEntity.Reference();
    final Element pathElement = new Element().setType(kind).setId(id);
    final Path path = myMessage.getMutablePath();
    path.addElement(pathElement);
    myMessage.setPath(path);
    if (namespace != null) {
        myMessage.setNameSpace(namespace);
    }
    myMessage.setApp(applicationId);
    // KeyFactory.keyToString uses the same web-safe base64 encoding of the serialized Reference.
    final BaseEncoding encoder = BaseEncoding.base64Url();
    final String alphanumericKey = encoder.omitPadding().encode(myMessage.toByteArray());
    return alphanumericKey;
}
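For illustration, a hedged usage example (the kind, id, application id and namespace values below are made up). For your own application id and the default namespace, the output should match what KeyFactory.keyToString produces for the same kind and id:

// Hypothetical values, shown only to illustrate the call shape.
final String safeUrlKey = getSafeUrlFromId("Person", 42L, "s~some-other-app", "some-namespace");
// The resulting string can be handed to clients or decoded again with KeyFactory.stringToKey.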

Related

meaning of parameters in parse method OWLAPI (building an AST)

I was looking for a good parser for OWL ontologies - initially in Python since I have very limited experience with Java. It seems that OWLAPI is the best choice as far as I can tell, and well, it is Java.
So, I am trying to parse an .owl file and build the AST from it. I downloaded owlapi and I'm having problems with it since it doesn't seem to have much in terms of documentation.
My very basic question is what the first two parameters of - say - OWLXMLParser's parse method stand for:
- document source: Is this the .owl file read as a stream (in getDocument below)?
- root ontology: What goes here? Initially I thought that this is where the .owl file goes; that seems not to be the case.
Does the parse method construct the AST or am I barking up the wrong tree?
I'm pasting some of my attempts below - there are more of them, but I'm trying to be less verbose :)
[The error I'm getting is this - if anyone cares - although the question is more fundamental:
java.lang.NullPointerException: stream cannot be null
at org.semanticweb.owlapi.util.OWLAPIPreconditions.checkNotNull(OWLAPIPreconditions.java:102)
at org.semanticweb.owlapi.io.StreamDocumentSourceBase.<init>(StreamDocumentSourceBase.java:107)
at org.semanticweb.owlapi.io.StreamDocumentSource.<init>(StreamDocumentSource.java:35)
at testontology.testparsers.OntologyParser.getDocument(App.java:72)
at testontology.testparsers.OntologyParser.test(App.java:77)
at testontology.testparsers.App.main(App.java:58)]
Thanks a lot for your help.
public class App
{
    public static void main( String[] args )
    {
        OntologyParser o = new OntologyParser();
        try {
            OWLDocumentFormat p = o.test();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

class OntologyParser {
    private OWLOntology rootOntology;
    private OWLOntologyManager manager;

    private OWLOntologyDocumentSource getDocument() {
        System.out.println("access resource stream");
        // Note: getResourceAsStream() returns null for a filesystem path that is not on the
        // classpath, which is what triggers the "stream cannot be null" NPE in the trace above.
        return new StreamDocumentSource(getClass().getResourceAsStream(
                "/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
    }

    public OWLDocumentFormat test() throws Exception {
        OWLOntologyDocumentSource documentSource = getDocument();
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology rootOntology = manager.loadOntologyFromOntologyDocument(
                new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl")));
        OWLDocumentFormat doc = parseOnto(documentSource, rootOntology);
        return doc;
    }

    private OWLDocumentFormat parseOnto(
            @Nonnull OWLOntologyDocumentSource initialDocumentSource,
            @Nonnull OWLOntology initialOntology) throws IOException {
        OWLParser initialParser = new OWLXMLParser();
        OWLOntologyLoaderConfiguration config = new OntologyConfigurator().buildLoaderConfiguration();
        //// option 1:
        //final OWLOntologyManager managerr = new OWLOntologyManagerImpl(new OWLDataFactoryImpl(), new ReentrantReadWriteLock(true));
        //final IRI iri = IRI.create("testasdf");
        //final IRI version = IRI.create("0.0.1");
        //OWLOntologyDocumentSource source = new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        //final OWLOntology onto = new OWLOntologyImpl(managerr, new OWLOntologyID(iri,version));
        //return initialParser.parse(initialDocumentSource, onto, config);
        ////
        // option 2:
        return initialParser.parse(initialDocumentSource, initialOntology, config);
    }
}
The owlapi parsers are designed for use by the OWLOntologyManager implementations, which are managed (unless you're writing a new owlapi implementation) by the OWLManager singleton. There are plenty of examples of how to use that class in the wiki pages.
All parsers included in the owlapi distribution are meant to create OWLAxiom instances in an OWLOntology, not to build an AST of an owl file - the syntactic shape of the file depends on the specific format, on the preferences of the writer, and so on, while the purpose of the api is to provide ontology manipulation functionality to the caller. The details of the output format can be tweaked, but exposing them to the caller is not part of the main design.
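To make that concrete, here is a minimal sketch of the intended usage path, loading through OWLManager rather than instantiating a parser yourself (the file path is the one from the question; the rest is a plain illustration, not the asker's code):

import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class LoadOntologyExample {
    public static void main(String[] args) throws OWLOntologyCreationException {
        // The manager picks the right parser for the document format automatically.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        // What you get back is a set of OWLAxiom objects, not a syntax tree.
        for (OWLAxiom axiom : ontology.getAxioms()) {
            System.out.println(axiom);
        }
    }
}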

Hystrix Javanica: Call always returning result from fallback method (Java web app without Spring)

I am trying to integrate Hystrix javanica into my existing java EJB web application and facing 2 issues with running it.
When I try to invoke the following service, it always returns the response from the fallback method, and I see that the Throwable object in the fallback method has a "com.netflix.hystrix.exception.HystrixTimeoutException" exception.
Each time this service is triggered, the HystrixCommand and fallback methods are called multiple times, around 50 times.
Can anyone suggest any inputs? Am I missing any configuration?
I am including the following libraries in my project:
[screenshot: project libraries]
I have set up my aspect file as follows:
<aspectj>
    <weaver options="-verbose -showWeaveInfo"></weaver>
    <aspects>
        <aspect name="com.netflix.hystrix.contrib.javanica.aop.aspectj.HystrixCommandAspect"/>
    </aspects>
</aspectj>
Here is my config.properties file in META-INF/config.properties
hystrix.command.default.execution.timeout.enabled=false
Here is my rest service file
#Path("/hystrix")
public class HystrixService {
#GET
#Path("clusterName")
#Produces({ MediaType.APPLICATION_JSON })
public Response getClusterName(#QueryParam("id") int id) {
ClusterCmdBean clusterCmdBean = new ClusterCmdBean();
String result = clusterCmdBean.getClusterNameForId(id);
return Response.ok(result).build();
}
}
Here is my bean class
public class ClusterCmdBean {

    @HystrixCommand(groupKey = "ClusterCmdBeanGroup", commandKey = "getClusterNameForId", fallbackMethod = "defaultClusterName")
    public String getClusterNameForId(int id) {
        if (id > 0) {
            return "cluster" + id;
        } else {
            throw new RuntimeException("command failed");
        }
    }

    public String defaultClusterName(int id, Throwable e) {
        return "No cluster - returned from fallback:" + e.getMessage();
    }
}
Thanks for the help.
If you want to ensure you are setting the property, you can do that explicitly in the circuit annotation itself:
@HystrixCommand(commandProperties = {
    @HystrixProperty(name = "execution.timeout.enabled", value = "false")
})
I would only recommend this for debugging purposes though.
Something that jumps out at me is that Javanica uses AspectJ AOP, which I have never seen work with new MyBean() before. I've always had to use @Autowired with Spring or similar to allow proxying. This could well just be something that is new to me, though.
If you set a breakpoint inside getClusterNameForId, can you see in the stack trace that it's being called via reflection (which it should be, AFAIK)?
Note you can remove commandKey as this will default to the method name. Personally I would also remove groupKey and let it default to the class name.
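If disabling the timeout outright feels too blunt, the usual alternative is to raise it instead. A hedged sketch, reusing the command from the question (the 3000 ms value is arbitrary; Hystrix's default is 1000 ms):

@HystrixCommand(fallbackMethod = "defaultClusterName", commandProperties = {
    // Keep the timeout, but give the command more room than the 1000 ms default.
    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "3000")
})
public String getClusterNameForId(int id) {
    return "cluster" + id;
}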

Authenticate user in WinForms (Nothing to do with ASP.Net)

Note: Cross-posted to ServerFault, based on comments.
Intro
I need to password protect some actions in my application, such as loading/saving files, clicking check-boxes, etc. This is a standard C# .Net 4.0, WinForms application which will run on Windows 7 in a corporate network.
I was about to roll my own very basic system (read obfuscation with wide open backdoors) with a text file of users/passwords/permissions (hashed and salted) until, after some searching, I found what looks like a tantalizingly simple approach, but I'm having trouble finding a good tutorial on Roles that isn't about ASP.NET.
Question
So does anyone know of one or more tutorials that show me how to:
Create a Windows User/Group and give that User/Group a Role or Permission.
Note that I'm testing this from my company's networked laptop, but will deploy it on the customer's corporate network (Not sure if this is an issue, or how tricky this will get).
Create a winforms/console app sample with even just a single method that prints "Hello World" if I'm authenticated or throws an exception if I'm not?
I've never done Network Admin or anything related and I keep reading about Active Directory and Local Users Vs Networked Users... I was hoping for an approach where I could build to an Interface and just ask Windows if the current user has permission ABC and not care too much about how Windows figured that out. Then I can make a concrete implementation for each Local/Network/ActiveDirectory/etc. use case as required (or if required... as I don't even know that right now).
Background (read if interested, but not required to answer the question)
Just to make sure I'm going in the right direction here, basically I need/want to test this on my development PC to make sure it's going to have a good end-user experience for my customer. The problem is that currently they run an Auto-login script for each computer that runs my application and there are several different operators that use my application throughout the day. The customer wants password protection on certain features of my app and only provide that to certain operators. I have no problem fitting this in, as I've expected the request for a while, I just haven't ever programmed authentication before.
I think it's worthwhile to convince my customer to give each operator their own network account and assign whatever permissions they want to that operator or group, in case they need to fire somebody, change permissions, etc. It also means I just open up several options for them and they can group those permissions however they see fit based on internal corporate policies, which I really shouldn't have to be worried about (but will be if I have to roll my own, as their IT department knows almost nothing of my application).
From what I can tell it also makes my life a lot easier by not having to deal with hashing passwords and encryption, etc. and just handle which Role is required to click this or that button.
First of all, you'd have to determine if you really want simple role-based authentication (you may want to read: http://lostechies.com/derickbailey/2011/05/24/dont-do-role-based-authorization-checks-do-activity-based-checks/).
If you're sure it's absolutely sufficient, you're already on the right track with the SO link you provided in your question. It's kind of confusing that there is no support for 'roles' by default in Windows, but there are groups. Groups can be local or remote (e.g. ActiveDirectory), so an admin could assign users to certain groups that are specific to your application (for an example look here: http://msdn.microsoft.com/en-us/library/ms731200(v=vs.110).aspx).
One key point: you have to prepare your application's central principal, i.e. fill it with the roles supported for the current user.
Therefore, on the very startup of your application you check the currently active user and set your application-wide principal and role(s). This may look like this (just a very simple example):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security;
using System.Security.Principal;
using System.Text;
using System.Threading;

namespace WindowsPrincipalTrial
{
    public class Program
    {
        // you could also move these definitions to a config file
        private static IDictionary<string, string> _groupRoleMappings = new Dictionary<string, string>()
        {
            {"MYAPPUSERGRP", MyRoles.Standard},
            {"MYAPPSUPPORTGRP", MyRoles.Extended},
            {"MYAPPADMINGRP", MyRoles.Admin},
        };

        private static void Main(string[] args)
        {
            var windowsId = WindowsIdentity.GetCurrent();
            if (windowsId != null)
            {
                var allRoleNames = getGroupCorrespondingRoles(windowsId);
                var newPrincipal = new GenericPrincipal(windowsId, allRoleNames);
                Thread.CurrentPrincipal = newPrincipal;
            }
            else
            {
                throw new NotSupportedException("There must be a logged on Windows User.");
            }
        }

        private static string[] getGroupCorrespondingRoles(WindowsIdentity id)
        {
            // you also could do this more elegantly with LINQ
            var allMappedRoleNames = new List<string>();
            string roleName;
            foreach (var grp in id.Groups)
            {
                var groupName = grp.Translate(typeof(NTAccount)).Value.ToUpper();
                if (_groupRoleMappings.TryGetValue(groupName, out roleName))
                {
                    allMappedRoleNames.Add(roleName);
                }
            }
            return allMappedRoleNames.ToArray();
        }
    }

    public static class MyRoles
    {
        public const string Standard = "standard_role";
        public const string Extended = "extended_role";
        public const string Admin = "admin_role";
    }
}
Then your Application-Principal is set up.
Now you could check access in your code like this:
public void DoSomethingSpecial()
{
    if (Thread.CurrentPrincipal.IsInRole(MyRoles.Extended))
    {
        // do your stuff
    }
    else
    {
        // maybe display an error
    }
}
Or more drastically:
public void DoSomethingCritical()
{
    var adminPermission = new PrincipalPermission(null, MyRoles.Admin);
    adminPermission.Demand();
    // do stuff
}
which is even possible declaratively, as known from ASP.NET:
[PrincipalPermission(SecurityAction.Demand, Role = MyRoles.Admin)]
public void DoSomethingMoreCritical()
{
    // do stuff
}
The ugly thing about the latter two examples is that they throw exceptions when the required role isn't present.
So you have to do the mapping between roles and groups quite early in your app, according to the systems you want to use (local groups, AD groups, LDAP groups, etc.).
If, however, you prefer authorization based on activities rather than roles after all, have a look at Windows Identity Foundation and claims-based authorization! There are already some ready-to-use frameworks out there (e.g. https://github.com/thinktecture/Thinktecture.IdentityModel).
UPDATE:
When it comes to activity-based and thereby claims-based authorization, I will try to show briefly how you could achieve it using Thinktecture's IdentityModel.
Generally that approach still uses roles internally, but has a kind of translation layer in between. Thinktecture already encapsulates many of the things needed. Authorization checks in code are then done via claim permissions, which are technically a kind of request for access to a certain resource. For the sake of simplicity I limit my example to actions only, using one single default resource (since ClaimPermission doesn't accept an empty resource).
If you want to use action#resource pairs, you'd have to modify the code accordingly.
At first you need a ClaimsAuthorizationManager:
public class MyClaimsAuthorizationManager : ClaimsAuthorizationManager
{
    private IActivityRoleMapper _actionToRolesMapper;

    public MyClaimsAuthorizationManager(IActivityRoleMapper mapper)
    {
        _actionToRolesMapper = mapper;
    }

    public override bool CheckAccess(AuthorizationContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        try
        {
            var action = getActionNameFromAuthorizationContext(context);
            var sufficientRoles = _actionToRolesMapper.GetRolesForAction(action)
                .Select(roleName => roleName.ToUpper());
            var principal = context.Principal;
            return CheckAccessInternal(sufficientRoles, principal);
        }
        catch (Exception ex)
        {
            return false;
        }
    }

    protected virtual bool CheckAccessInternal(IEnumerable<string> roleNamesInUpperCase, IClaimsPrincipal principal)
    {
        var result = principal.Identities.Any(identity =>
            identity.Claims
                .Where(claim => claim.ClaimType.Equals(identity.RoleClaimType))
                .Select(roleClaim => roleClaim.Value.ToUpper())
                .Any(roleName => roleNamesInUpperCase.Contains(roleName)));
        return result;
    }

    // I'm ignoring resources here, modify this, if you need'em
    private string getActionNameFromAuthorizationContext(AuthorizationContext context)
    {
        return context.Action
            .Where(claim => claim.ClaimType.Equals(ClaimPermission.ActionType))
            .Select(claim => claim.Value)
            .FirstOrDefault();
    }
}
As you may have guessed, IActivityRoleMapper is an interface for a class that returns the names of all roles that include permission for a given action.
This class is very individual and I guess you'll find your own way of implementing it, because it's not the point here. You could do it by hardcoding, loading from XML, or loading from a database. Also, you would have to change/extend it if you wanted to use action#resource pairs for permission requests.
Then you'd have to change the code in the Main() method to:
using Thinktecture.IdentityModel;
using Thinktecture.IdentityModel.Claims;
using Microsoft.IdentityModel.Web;

private static void Main(string[] args)
{
    var windowsId = WindowsIdentity.GetCurrent();
    if (windowsId != null)
    {
        var rolesAsClaims = getGroupCorrespondingRoles(windowsId)
            .Select(role => new Claim(ClaimTypes.Role, role))
            .ToList();
        // just if you want, remember the username
        rolesAsClaims.Add(new Claim(ClaimTypes.Name, windowsId.Name));
        var newId = new ClaimsIdentity(rolesAsClaims, null, ClaimTypes.Name, ClaimTypes.Role);
        var newPrincipal = new ClaimsPrincipal(new ClaimsIdentity[] { newId });
        AppDomain.CurrentDomain.SetThreadPrincipal(newPrincipal);
        var roleMapper = new ActivityRoleMapper(); // you have to implement
        // register your own authorization manager, so IdentityModel will use it per default
        FederatedAuthentication.ServiceConfiguration.ClaimsAuthorizationManager = new MyClaimsAuthorizationManager(roleMapper);
    }
    else
    {
        throw new NotSupportedException("There must be a logged on Windows User.");
    }
}
Finally you can check access this way:
public const string EmptyResource = "myapplication";

public void DoSomethingRestricted()
{
    if (!ClaimPermission.CheckAccess("something_restricted", EmptyResource))
    {
        // error here
    }
    else
    {
        // do your really phat stuff here
    }
}
Or again, with exceptions:
private static ClaimPermission RestrictedActionPermission = new ClaimPermission(EmptyResource, "something_restricted");

public void DoSomethingRestrictedDemand()
{
    RestrictedActionPermission.Demand();
    // play up, from here!
}
Declarative:
[ClaimPermission(SecurityAction.Demand, Operation = "something_restricted", Resource = EmptyResource)]
public void DoSomethingRestrictedDemand2()
{
    // do stuff
}
Hope this helps.

Objectify with Cloud Endpoints

I am using App Engine Cloud Endpoints and Objectify. I have previously deployed these endpoints, and now I am updating them and it is not working with Objectify. I have moved to a new machine and am running the latest App Engine SDK, 1.8.6. I have tried putting Objectify on the classpath and that did not work. I know this can work; what am I missing?
When running endpoints.sh:
Error: Parameterized type
com.googlecode.objectify.Key<MyClass> not supported.
UPDATE:
I went back to my old computer and ran endpoints.sh on same endpoint and it worked fine. Old machine has 1.8.3. I am using objectify 3.1.
UPDATE 2:
Updated my old machine to 1.8.6 and I get the same error as on the other machine. That leaves 2 possibilities:
1) Endpoints no longer support objectify 3.1
or
2) Endpoints have a bug in most recent version
Most likely #1...I've been meaning to update to 4.0 anyways...
Because of the popularity of Objectify, a workaround was added in prior releases to support the Key type until a more general solution was available. Now that the new solution is available, the workaround has been removed. There are two ways you can now approach the issue with the property.
Add an @ApiResourceProperty annotation that causes the key to be omitted from your object during serialization. Use this approach if you want a simple solution and don't need access to the key in your clients.
Add an @ApiTransformer annotation that provides a compatible mechanism to serialize/deserialize the field. Use this approach if you need access to the key (or a representation of it) in your clients. As this requires writing a transformer class, it is more work than the first option (a sketch of such a transformer follows below).
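A minimal sketch of what such a transformer could look like, assuming an entity with a Key<Driver> field as in the answer below; the class name and the choice of exposing the key as a plain Long are my own, not part of the Endpoints documentation:

import com.google.api.server.spi.config.Transformer;
import com.googlecode.objectify.Key;

// Serializes an Objectify Key<Driver> as its numeric id, and rebuilds it on the way back in.
public class DriverKeyTransformer implements Transformer<Key<Driver>, Long> {
    @Override
    public Long transformTo(Key<Driver> in) {
        return in == null ? null : in.getId();
    }

    @Override
    public Key<Driver> transformFrom(Long in) {
        return in == null ? null : Key.create(Driver.class, in);
    }
}

The transformer then still has to be registered, e.g. via the transformers attribute of the @Api annotation or an @ApiTransformer annotation on the transformed class.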
I came up with the following solution for my project:
@Entity
public class Car {
    @Id Long id;

    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    Key<Driver> driver;

    public Key<Driver> getDriver() {
        return driver;
    }

    public void setDriver(Key<Driver> driver) {
        this.driver = driver;
    }

    public Long getDriverId() {
        return driver == null ? null : driver.getId();
    }

    public void setDriverId(Long driverId) {
        driver = Key.create(Driver.class, driverId);
    }
}

@Entity
public class Driver {
    @Id Long id;
}
I know it's a little bit of boilerplate, but hey - it works and adds some handy shortcut methods.
At first, I did not understand the answer given by Flori, and how useful it really is. Because others may benefit, I will give a short explanation.
As explained earlier, you can use @ApiTransformer to define a transformer for your class. This would transform an unserializable field, like those of type Key<myClass>, into something else, like a Long.
It turns out that when a class is processed by Cloud Endpoints, methods called get{FieldName} and set{FieldName} are automatically used to transform the field {fieldName}. I have not been able to find this anywhere in Google's documentation.
Here is how I use it for the Key<Machine> property in my Exercise class:
public class Exercise {

    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    public Key<Machine> machine;

    // ... more properties

    public Long getMachineId() {
        return this.machine.getId();
    }

    public void setMachineId(Long machineId) {
        this.machine = new Key<Machine>(Machine.class, machineId);
    }

    // ...
}
Others already mentioned how to approach this with @ApiResourceProperty and @ApiTransformer. But I do need the key available client-side, and I don't want to transform the whole entity for every one. I tried replacing the Objectify Key with com.google.appengine.api.datastore.Key, and it looks like it worked just fine in my case, since the problem here is mainly that endpoints do not support parameterized types; a sketch of that variant is below.
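A minimal sketch of that variant, reusing the Car/Driver example from above (the helper method names are my own). The raw datastore Key is not parameterized, so the endpoints generator accepts it, and Objectify can still persist it:

import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class Car {
    @Id Long id;

    // Raw, non-generic datastore key instead of com.googlecode.objectify.Key<Driver>.
    Key driver;

    public void setDriverId(long driverId) {
        this.driver = KeyFactory.createKey("Driver", driverId);
    }

    public Long getDriverId() {
        return driver == null ? null : driver.getId();
    }
}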

Enterprise Library 5: Creating instances of Enterprise Library objects

I am using Enterprise Library 5.0 in my WinForms application.
1. Regarding creating instances of Enterprise Library objects
What is the best way to resolve the reference for Logging / Exception objects? In our solution we have different applications, so the solution has the projects below:
CommonLib (Class Lib)
CustomerApp (winform app)
CustWinService (win service proj)
ClassLib2 (class Lib)
I have implemented logging / exceptions in the CommonLib project as below, with a class AppLog created as follows:
public class AppLog
{
    public static LogWriter defaultWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
    public static ExceptionManager exManager = EnterpriseLibraryContainer.Current.GetInstance<ExceptionManager>();

    public AppLog()
    {
    }

    public static void WriteLog(string LogMessage, string LogCategories)
    {
        // Create a LogEntry and populate the individual properties.
        if (defaultWriter.IsLoggingEnabled())
        {
            string[] Logcat = LogCategories.Split(",".ToCharArray());
            LogEntry entry2 = new LogEntry();
            entry2.Categories = Logcat;
            entry2.EventId = 9007;
            entry2.Message = LogMessage;
            entry2.Priority = 9;
            entry2.Title = "Logging Block Examples";
            defaultWriter.Write(entry2);
        }
    }
}
And then I used the AppLog class as below for logging and exception handling in different projects:
try
{
    AppLog.WriteLog("This is Production Log Entry.", "ExceCategory");
    string strtest = string.Empty;
    strtest = strtest.Substring(1);
}
catch (Exception ex)
{
    bool rethrow = AppLog.exManager.HandleException(ex, "ExcePolicy");
}
So is this the correct way to use logging and exception handling, or is there any other way I can improve it?
2. Making the logging file name dynamic
In the Logging block we have fileName, which needs to be set in the app.config file. Is there a way I can assign the fileName value dynamically in code? I don't want to hard-code it in the config file, and the paths are different for the production and development environments.
Thanks
TShah
To keep your application loosely coupled and easier to test, I would recommend defining separate logging and exception handling interfaces, then having your AppLog class implement both. Your application can then perform logging and exception handling via those interfaces, with AppLog providing the implementation.
You can have a different file name set per environment using config transforms, which I believe you can use in a winforms application by using Slow Cheetah.
