According to the official Jackrabbit Oak documentation, one way of creating a Repository instance is to create a MicroKernel object and pass it to the Jcr class's parameterized constructor, like this:
MicroKernel kernel = ...;
Repository repository = new Jcr(kernel).createRepository();
But looking at the Jcr class javadocs, I can't find any constructor which takes an object of type MicroKernel.
So my question is:
How can we get a Repository object using a MicroKernel in Jackrabbit Oak (not Jackrabbit 2.0)?
Note: I want a repository which uses normal file system as the content storage medium.
The documentation is unfortunately lagging behind in some areas. The MicroKernel interface has been superseded by the NodeStore interface in Oak.
For file system persistence you'd use a SegmentNodeStore backed by a FileStore. Have a look at how the respective test cases set up the repository.
In a nutshell:
File directory = ...
FileStore store = new FileStore(directory, 1, false); // max segment file size (MB), no memory mapping
Jcr jcr = new Jcr(new Oak(new SegmentNodeStore(store)));
Repository repository = jcr.createRepository();
Try using MicroKernelImpl's public no-arg constructor to create an in-memory kernel instance:
MicroKernel kernel = new MicroKernelImpl();
Repository repository = new Jcr(kernel).createRepository();
Alternatively, you can use the Oak class to create a Repository:
MicroKernel kernel = new MicroKernelImpl();
Repository repo = new Oak(kernel).createRepository();
I'm new to Aerospike.
How do I create a new namespace and a new set?
I have gone through some docs and videos but didn't find anything useful.
I have read a roughly five-year-old blog post saying that namespaces and sets can only be created through the config file.
Is that still true, or are there other commands?
In order to create a namespace you'll need to modify the aerospike.conf file since namespaces cannot be created dynamically.
By default the "test" namespace is included in the aerospike.conf file (located in /etc/aerospike/aerospike.conf).
You can read more about Aerospike configuration and Namespace configuration.
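For illustration, a minimal namespace stanza in aerospike.conf might look like the sketch below; the namespace name, sizes, and file path are placeholders to adjust for your deployment:

```
namespace mynamespace {
    replication-factor 2
    memory-size 1G
    default-ttl 30d
    storage-engine device {
        file /opt/aerospike/data/mynamespace.dat
        filesize 4G
        data-in-memory false
    }
}
```

After editing the file, restart the Aerospike daemon for the new namespace to take effect.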
Sets can be "created" dynamically (they are a logical hierarchy), so you don't have to explicitly create a set; you just specify the set you want to write/update/delete/read in the operation.
If you are new to Aerospike, I suggest using the Interactive Tutorials (currently available for Java, Python and Spark) or the Client Libraries documentation to see some code examples.
I have developed a WPF plugin-based application whereby plugin assemblies are dynamically loaded into a "host" application and both the host application and its plugins reference common assemblies.
If at some point in the future I wish to tweak a class of the common assembly, I don't want to have to recompile all of the plugins in order for them to work within a host application which might be running with a different version of common assemblies.
Scenario:
There are 2 versions of the common assemblies (1.1.0.0 and 1.2.0.0), signed and deployed to the Global Assembly Cache. Each contains a class "Foo" which remains untouched between versions. Due to the architecture, Foo has to stay within the common assembly.
The host application is built against common version 1.1.0.0 and provides a base class from which view models within all plugins can derive; its function is very UI-centric, so it has to stay within the host application.
Plugin #1 is built against common version 1.1.0.0
Plugin #2 is built against common version 1.2.0.0
Relevant third party components used: Microsoft's Prism and ServiceLocation and Castle (Windsor)
Common:
public class Foo
{
// Some useful properties
}
Host:
public class ViewModelBase<T> where T : Foo
{
// Some useful behaviour
}
Plugin #1:
public class ViewModel : ViewModelBase<Foo>
{
}
Plugin #2:
public class ViewModel : ViewModelBase<Foo>
{
}
ISSUE:
Upon loading Plugin #2, I receive a ReflectionTypeLoadException due to the fact that the Foo class of version 1.1.0.0 is not considered the same as the Foo class of version 1.2.0.0 and so using Foo as the type parameter for the view model in Plugin #2 is invalid.
IDEAS:
Using a more immutable "core" common assembly to contain the Foo class (but in the end, this would entail taking too many classes from too many different assemblies) and so isn't an option
Using assembly redirects (but forcing plugins to use the same version of the common assemblies as the host application does not guarantee that a plugin that works during development will continue to work post-deployment, unless rules are put in place ensuring no breaking changes can be introduced, e.g. via obsolescence attributes)
Has anyone managed to get a truly side-by-side (not to be confused with .NET framework side-by-side) scenario like this working (be it in single or multiple app domains)?
Thanks very much,
Rob
We ended up getting around the issue using assembly binding redirects in the app.config file of the Host application.
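For reference, such a redirect is declared in the host's app.config; the sketch below uses the version numbers from the scenario above, while the assembly name and public key token are placeholders for your common assembly's real identity:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Placeholder identity; use your common assembly's real name and token -->
        <assemblyIdentity name="Common" publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- Map every reference in the 1.1.0.0-1.2.0.0 range onto a single loaded version -->
        <bindingRedirect oldVersion="1.1.0.0-1.2.0.0" newVersion="1.2.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

With this in place, both plugins resolve Foo from the same assembly version, so the generic constraint on ViewModelBase&lt;T&gt; is satisfied.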
I am working on a new web application using Scala with Lift. I want to make it reusable so others might install it on their own servers for their own needs. I come out of a PHP background where it is common practice to create an install form asking for database connection details. This information is stored in a configuration file and used by the rest of the PHP application for connecting to the database. It is extremely convenient for the user because everything is contained within the directory storing the PHP files. They are free to define everything else. My Java/Scala background has all been enterprise work where an application was only intended to run on the database we setup for it. It was not meant to be installed on others' web servers or with different databases.
So my question is how is this typically done for the Java/Scala world? If there are open source applications implementing the mainstream solution, feel free to point me to those too.
I use this to set up the database:
val vendor =
new StandardDBVendor(
Props.get("db.driver") openOr "org.h2.Driver",
Props.get("db.url") openOr "jdbc:h2:mem:db;AUTO_SERVER=TRUE",
Props.get("db.user"),
Props.get("db.password"))
LiftRules.unloadHooks.append(vendor.closeAllConnections_! _)
DB.defineConnectionManager(DefaultConnectionIdentifier, vendor)
The 'Props' values referred to will then be read (by default) from the file default.props in the props directory under resources.
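For illustration, a matching default.props might contain the following; the keys are the ones read in the code above, and the values are placeholders:

```
db.driver=org.h2.Driver
db.url=jdbc:h2:file:webapp_db;AUTO_SERVER=TRUE
db.user=sa
db.password=
```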
Updated: This is what I do on servers in production. With 'Props.whereToLook' you provide a function that retrieves an input stream of the configuration. This can be a file, as in the example below, or you could for example fetch it over a network socket.
If the configuration cannot be found, you will probably want to let the application fail with an error.
val localFile = () => {
val file = new File("/opt/jb/myapp/etc/myapp.props")
if (file.exists) Full(new FileInputStream(file)) else Empty
}
Props.whereToLook = () => (("local", localFile) :: Nil)
I am not sure if I am missing your points.
By default, Lift uses a Scala source file (Boot.scala) to configure all the settings, because Lift doesn't want to introduce another language into the framework; however, you can override some of the configuration using a .properties file.
In the Java/Scala world, we use .properties files. A .properties file is just a plain text file used for configuration, localization, etc., much like text configuration files in PHP.
Lift has default support for external database configuration files; check out the code in Boot.scala: if a .properties file exists, the database will be initialized using that connection configuration; if it doesn't, the source-file configuration will be used.
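The .properties mechanism itself needs nothing beyond the JDK: java.util.Properties parses the key=value format. A minimal, self-contained sketch (keys borrowed from the Lift example above; values are placeholders, and the file contents are inlined rather than loaded from the classpath):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws Exception {
        // In a real app this would be loaded from a file on the classpath;
        // inlined here to keep the example self-contained.
        String config = "db.driver=org.h2.Driver\n" +
                        "db.url=jdbc:h2:mem:db";
        Properties props = new Properties();
        props.load(new StringReader(config));
        System.out.println(props.getProperty("db.driver"));      // org.h2.Driver
        // The second argument is a default, similar to Props.get(...) openOr ...
        System.out.println(props.getProperty("db.user", "sa"));  // sa
    }
}
```

The same file format works unchanged from Scala, since Properties is just a JDK class.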
I have a web application with some data in its datastore. I have just finished another version of it, in which I changed one of the persistent classes. Basically, there is a class called "Node" (which represents a node in a hierarchy tree) that used to store its author as
private CmsUser author;
and now it stores its author as
private Key author;
When I deployed that second version to the server (as another version), it didn't work (which is predictable).
Is there any way to make it work? Or do I have to create another entity instead of Node and write a piece of code that would change all my old nodes into new ones?
Thanks.
You will have to write some code that loads each Node in its old form, then saves it in the new form.
Since it looks like you are using Java, you can do this with the low-level API. If you were using Python, there is a trick you can do with Expando. See here
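The conversion pass can be sketched schematically. The sketch below models datastore records as plain maps so it stays self-contained; the real code would use the App Engine low-level API's Entity, Key, and DatastoreService types instead. CmsUser, Node, and the author property come from the question; everything else is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class NodeMigration {
    // Stand-in for building a datastore Key from the author's identity;
    // the real API produces an opaque Key value.
    static String keyFor(String userName) {
        return "CmsUser:" + userName;
    }

    // Convert one old-format Node record: replace the embedded author
    // object with the author's key, matching the new Node class.
    static Map<String, Object> migrate(Map<String, Object> oldNode) {
        Map<String, Object> newNode = new HashMap<>(oldNode);
        Map<?, ?> author = (Map<?, ?>) oldNode.get("author"); // old: embedded CmsUser
        newNode.put("author", keyFor((String) author.get("name"))); // new: Key reference
        return newNode;
    }

    public static void main(String[] args) {
        Map<String, Object> old = new HashMap<>();
        old.put("title", "root");
        old.put("author", Map.of("name", "alice"));
        System.out.println(migrate(old).get("author")); // CmsUser:alice
    }
}
```

In the real migration you would query for all Node entities, apply a conversion like migrate to each, and save the results back, batching the work to stay within request limits.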
You might want to try the new Mapper API to handle looping through all your entities.
I'm using Prism in my WPF application and up to now, I've been loading the modules via var moduleCatalog = new ConfigurationModuleCatalog();. I'd like to get the module catalog from a database. The Prism documentation indicates that this is possible, but it doesn't go into any details.
Has anyone done this and can provide some guidance?
This is a theoretical possibility, but it's not in any samples I've seen.
Basically what you'd do is either base64-encode the DLLs/files into the database or zip them up and store them in one blob. You'd download them in your bootstrapper, copy them locally (to a temp directory), and then allow them to load normally from the file system using the DirectoryModuleCatalog. If you wanted it to be a bit more elegant, you could write your own ModuleCatalog that encapsulates this logic.
This is very similar to what I do: I actually download a zip file of all of the modules from a website at launch time, unzip them, and load them with the DirectoryModuleCatalog.
You can write your own ModuleCatalog implementation by implementing IModuleCatalog. Your implementation can then populate the catalog by any means you define.
You could also use the CreateFromXAML overload that accepts a Stream and implement a webservice that delivers the ModuleCatalog in XAML over HTTP.