JCR checkin/checkout operations - jackrabbit

I'm just starting to work with JCR (Apache Jackrabbit), and I want to ask a simple question (because I couldn't find a good tutorial for it):
What do I need the Node.checkout and Node.checkin methods for?
What do they mean?
Thanks

The 'checkin' and 'checkout' methods have to do with how a JCR repository tracks the versions of content. The 'checkout' method signals to the repository that your client application is (likely) going to be modifying some versionable content. The 'checkin' method signals to the repository that your client application has made changes to the versionable content, and that the repository should record those changes (i.e., the new version) in the version history.
To create content, you simply set the 'mix:versionable' mixin (or use a mixin or primary node type that inherits from 'mix:versionable') on a node and then save your changes. At that point, the repository will initialize the version history for that node (or subgraph).
For example, let's imagine that we want to create a node at '/a/b/c' that is versionable. This is done using something like the following code:
Node b = session.getNode("/a/b");
Node newNode = b.addNode("c");
newNode.addMixin("mix:versionable");
// set other properties and create children
session.save();
Upon 'session.save()', the repository will note the 'mix:versionable' mixin and will initialize the version history for the content at '/a/b/c'. From this point on, your client application uses 'checkout' and 'checkin' to add new versions to the history.
VersionManager vm = session.getWorkspace().getVersionManager();
vm.checkout("/a/b/c");
// make some changes at/under '/a/b/c'
session.save();
// Can make more changes and save, if desired
vm.checkin("/a/b/c");
When 'checkin' is called, the repository will take the current state of '/a/b/c' and will add it to the version history. Of course, this process is repeated each time you want to make changes to versionable nodes.

In Jackrabbit 2.x, the methods on Node are deprecated. Instead, use VersionManager.checkout / checkin (they are available in Jackrabbit 1.x as well). Here is some sample code:
Node test = s.getRootNode().addNode("test");
Node t1 = test.addNode("t1");
t1.addMixin("mix:versionable");
s.save();
VersionManager vm = s.getWorkspace().getVersionManager();
vm.checkout("/test/t1");
t1.setProperty("data", "Hello");
s.save();
vm.checkin("/test/t1");

Related

Is there a way to limit the members of a camel-cluster (using KubernetesClusterService) using a pod selector (selection)?

I am programmatically creating my CamelContext and successfully creating the ClusterServiceProvider using the KubernetesClusterService implementation. When running in my kubernetes cluster, it is electing a leader and responding appropriately to "master:" routes. All good.
However, I would like to limit the Pod/Deployments that are detected in the cluster member negotiation/inspection. It currently has knowledge of (finds) every Pod in the cluster namespace which includes completely unrelated deployments/instances.
The overall question is: how do I select which Pods/Deployments should be included in a particular Camel cluster?
I see in the KubernetesLockConfiguration there is an attribute called clusterLabels, however it is unclear to me how or what that is used for. When I do set clusterLabels to something syntactically common in Kubernetes (e.g. app -> my-app), the cluster finds no members.
I mention that I am doing this programmatically as there is no spring-boot or other commonly documented configuration of camel involved. Running in Play-Framework Scala.
First, I create a CamelContext
val context = new DefaultCamelContext()
val rb = new RouteBuilder() {
  def configure() = {
    val policy = ClusteredRoutePolicy.forNamespace("default")
    from(s"master:ip:timer://master-timer?fixedRate=true&period=60000")
      .routePolicy(policy)
      .bean(classOf[MasterTimer], "execute()")
      .log("Current Leader ${routeId}")
  }
}
Second, I create a ClusterServiceProvider
import org.apache.camel.component.kubernetes.cluster.KubernetesClusterService
import scala.collection.JavaConverters._ // needed for .asJava below (scala.jdk.CollectionConverters._ on Scala 2.13+)
val crc = new ClusteredRouteController()
val service = new KubernetesClusterService
service.setCamelContext(context)
service.setKubernetesNamespace("default")
//
// If I set clusterLabels here, no camel cluster is realized/created.
// My assumption is that my syntax for CamelKubernetes is wrong; however,
// it is unclear from the documentation how to make this work.
//
// If I do not set clusterLabels, every pod in my kubernetes cluster is
// part of a cluster-member (CamelKubernetesLeaderNotifier logs that
// the list of cluster members has changed). So I get completely
// unrelated deployments in the context of something that I want
// to be specifically related, namely all pods with an "app" label of
// "my-app".
//
service.setClusterLabels(Map("app" -> "my-app").asJava)
crc.setNamespace("default")
crc.setClusterService(service)
context.addService(service)
context.setRouteController(crc)
context.start()
context.getRouteController().startAllRoutes()

Using dispatcher to invoke on another thread - Config/Settings is not threadsafe in .Net Core 3.1

Issue:
I am trying to update the GUI from a second thread.
Unfortunately the label stays the same. At first I thought there was an issue with the dispatcher, but it works fine. It appears that the configuration is not thread-safe!
Code to reproduce:
I have a settings file which is mainly used to keep variables persistent across application relaunches.
This is the update code:
// get the amount of tickets created for me last week
int amountOftickets = JiraInterface.DayStatisticsGenerator.GetLastWeeksTicketAmount();
config.Default.Lastweekstickets = amountOftickets; // int == 12
// Update GUI on GUI thread
mainWindow.Dispatcher.Invoke(() =>
{
    mainWindow.SetIconsAccordingtoConfig();
    mainWindow.NumberTicketsCreated.Content = config.Default.Lastweekstickets.ToString(); // int == 0!!
});
Does anyone have an idea on how to get the configuration updated on one thread over to the GUI thread?
After a quick look in the documentation, it seems that you have two options:
You can either use Dispatcher.Invoke to set your config.Default on the GUI thread.
Or rely on the fact that .NET Core and .NET Framework support a synchronized SettingsBase, which is what the configs are ( https://learn.microsoft.com/en-us/dotnet/api/system.configuration.settingsbase.synchronized?view=netcore-3.0 )
EDIT 1:
Looking into the C# properties more: if you look at your config file in Visual Studio under
Properties -> Config.settings -> Config.Designer.cs
you will see that during class initialization .NET Framework already calls Synchronized, which should make the config thread-safe:
private static Config defaultInstance = ((Config)(global::System.Configuration.ApplicationSettingsBase.Synchronized(new Config())));
It could be that when you created this file, Synchronized wasn't used, so your config file isn't thread-safe, especially if it was created in code and not via the designer/properties.
From Microsoft Docs on SettingsBase Synchronized:
The indexer will get and set property data in a thread-safe manner if the IsSynchronized property is set to true. A SettingsBase instance by default is not thread-safe. However, you can call Synchronized passing in a SettingsBase instance to make the SettingsBase indexer operate in a thread-safe manner.
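To illustrate the second option, here is a minimal sketch of a hand-written settings class wrapped with Synchronized. The class and property names (Config, Lastweekstickets) mirror the question and are otherwise placeholders, and it assumes the System.Configuration.ConfigurationManager package is referenced on .NET Core:
using System.Configuration;

public sealed class Config : ApplicationSettingsBase
{
    // Wrapping the instance makes the indexer thread-safe (IsSynchronized becomes true),
    // just like the designer-generated line shown above.
    private static readonly Config defaultInstance =
        (Config)Synchronized(new Config());

    public static Config Default
    {
        get { return defaultInstance; }
    }

    [UserScopedSetting]
    [DefaultSettingValue("0")]
    public int Lastweekstickets
    {
        get { return (int)this["Lastweekstickets"]; }
        set { this["Lastweekstickets"] = value; }
    }
}
With this in place, the worker thread can assign Config.Default.Lastweekstickets, and the value read inside the Dispatcher.Invoke callback on the GUI thread should be the updated one.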

How to perform a catalog update correctly in FDT 1.x?

I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seem to be more steps than the obvious ones, which in pseudocode are:
foreach (progid in Registry having component category "FDT DTM")
{
    dtm = CoCreateInstance(progid);
    StartDTMAccordingStateMachine(dtm);
    info = dtm.GetInformation("FDT");
    catalog.Add(info);
    ShutdownDTMAccordingStateMachine(dtm);
    Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?
Basically the idea for the catalog update is correct. Unfortunately the rumours are also true: doing a catalog update involves some more considerations, namely:
Frame application interface considerations
During the catalog update, the DTM is not part of a project yet. Therefore the frame application could be implemented without project specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs will query for those interfaces immediately and throw an exception if the frame application does not support those interfaces.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The bottom line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need a very small .NET 2.0 object to be loaded before any DTM is started.
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g. like
updateProcess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
    dtmCount++;
    if (dtmCount % 10 == 0)
    {
        // restart the process to avoid out-of-memory situations
        updateProcess.SignalShutdown();
        updateProcess.WaitForExit();
        updateProcess = StartProcess();
    }
    updateProcess.StartDTM(progid);
    info = updateProcess.GetDtmInformation();
    catalog.Add(info);
    updateProcess.ShutdownDTM();
}
In the update process you'll need to create the COM object and follow the state machine etc.
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very uncomfortable. FDT 1.2.1 solves this problem in the way that the DTM can now bring XDR schemas with it. The definition is in the XML from GetInformation() at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM will publish a directory name there. In theory, this is an easy task: to install the XDR schemas, we need to update the GetDtmInformation() a little bit:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    foreach (dtmSchemaPath in schemaPaths)
    {
        CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    // *) read on, more code needed here
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Unfortunately there is a logical bug in the sequence now. Since the DTM was already started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up the schema cache to validate XMLs. The DTM cannot be notified about updates in the schema path.
Therefore you need to restart the DTM in order to be sure that it gets the latest version of XDR schemas. In code, this means you have to update the whole code to:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    schemasUpdated = false;
    foreach (dtmSchemaPath in schemaPaths)
    {
        schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    if (schemasUpdated)
    {
        // restart the DTM to make sure it uses the latest versions of the schemas
        ShutdownDTMAccordingStateMachine(dtm);
        Release(dtm);
        dtm = CoCreateInstance(progid);
        StartDTMAccordingStateMachine(dtm);
        result.info = dtm.GetInformation("FDT");
    }
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
XDR schema version information issue
In the chapter before, I have used a simple CopyFiles() operation to update the XDR schema files. This method is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The attribute #default defines the version number of the schema. #schemaVersion itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 strongly suggests that it correlates to FDT version 1.2.1, which brings up the question of how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
    filesCopied = false;
    foreach (filename in sourceDir)
    {
        existingVersion = ExtractVersion(destinationDir + filename);
        newVersion = ExtractVersion(sourceDir + filename);
        if (newVersion > existingVersion)
        {
            File.Copy(sourceDir + filename, destinationDir + filename);
            filesCopied = true;
        }
    }
    return filesCopied;
}
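The ExtractVersion() helper is not spelled out above. A minimal C# sketch, assuming the usual XDR namespace and picking just one of the three interpretations discussed above (plain System.Version comparison of "major.minor"), could look like this:
using System;
using System.IO;
using System.Xml;

static Version ExtractVersion(string xdrFilePath)
{
    // A missing destination file simply counts as "no version installed yet".
    if (!File.Exists(xdrFilePath))
        return new Version(0, 0);

    var doc = new XmlDocument();
    doc.Load(xdrFilePath);

    // XDR schemas use the urn:schemas-microsoft-com:xml-data namespace.
    var ns = new XmlNamespaceManager(doc.NameTable);
    ns.AddNamespace("x", "urn:schemas-microsoft-com:xml-data");

    // <AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
    var node = doc.SelectSingleNode(
        "//x:AttributeType[@name='schemaVersion']", ns) as XmlElement;
    string raw = node == null ? null : node.GetAttribute("default");

    // Interpretation (b): "major.minor" compared per component, so 1.21 > 1.1 > 1.0.
    // This is only one of the possible readings listed above, not an official rule.
    return string.IsNullOrEmpty(raw) ? new Version(0, 0) : new Version(raw);
}
CopyFiles() above then compares the two Version values with the > operator.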
XDR schema update impact on other DTMs
In the last chapter we return a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, this update may not only affect the current DTM; it may also affect other DTMs of the same protocol which have been added to the catalog before.
While you could simply restart the whole catalog update from scratch, this would imply a huge performance impact. The better way seems to be to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
    schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
    if (schemasUpdated)
    {
        listOfChangedProtocols.Add(ExtractProtocolId(destinationDir));
    }
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
    protocolsChanged = new List<protocolID>();
    foreach (filename in sourceDir)
    {
        foreach (subdirectory in destinationDir)
        {
            files = GetFiles(subdirectory, pattern = filename);
            if (files.Count == 1)
            {
                UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename);
                protocolsChanged.Add(ExtractProtocolId(subdirectory));
            }
        }
    }
    return protocolsChanged;
}

void UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
    existingVersion = ExtractVersion(destinationDir + filename);
    newVersion = ExtractVersion(sourceDir + filename);
    if (newVersion > existingVersion)
    {
        File.Copy(sourceDir + filename, destinationDir + filename);
    }
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and gateway DTMs need to bring XDR schemas with them. The rationale behind that was probably that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update and hopefully find a communication DTM or gateway DTM of the same protocol which brings the XDR schemas. Then you start the device DTM again and it will deliver useful information.
This does not need an update to any code. It should already work if you followed all the steps described before. The only thing to consider here is good error handling (but I'll not do that in pseudo code here).
Conclusion
Hopefully this covers all the topics that are important to know in connection with the FDT 1.x catalog update. As you can see, it is not just a rumour.

WPF Click-Once deployment promoting from stage to production

I have a WPF application that uses a web server to download the app and execute it on the client. I've also created a staging environment for that application, where I put a release as soon as new features are added or bugs are fixed.
I've not found a reasonable way of promoting from staging to production, since the app.config is hashed, so I can't change what it points at (DB/services) by editing it.
My current way is publishing for staging, increasing the publish version by 1, and publishing for production, but this is quite frustrating since I have to do the work twice. Any suggestion?
Thanks
Our team encountered the same situation a year ago. We solved it by following these steps:
Determine the latest ClickOnce application version;
Remove the *.deploy extensions;
Make the necessary *.config file changes;
Update the application manifest (*.manifest) by using 'Mage.exe' and your certificate (see also: MSDN);
Update the deployment manifest (*.application) in the application version directory and in the root directory, again by using 'Mage.exe';
Add back the *.deploy extensions (a sketch for steps 2 and 6 follows after the Mage sample below).
Here is a short code sample for calling Mage; it is really not that complicated.
// Compose the arguments to start the Mage tool.
string arguments = string.Format(
    @"-update ""{0}"" -appmanifest ""{1}"" -certfile ""{2}""",
    deploymentManifestFile.FullName,
    applicationManifestFile.FullName,
    _certificateFile);
// Add the password to the list of arguments if necessary.
arguments += !string.IsNullOrEmpty(_certificateFilePassword) ? string.Format(" -pwd {0}", _certificateFilePassword) : null;
// Start the Mage process.
ProcessStartInfo startInfo = new ProcessStartInfo(_mageToolPath, arguments);
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
startInfo.RedirectStandardOutput = true;
Process mageProcess = Process.Start(startInfo);
// Read all output of the Mage tool (read before WaitForExit to avoid a deadlock on a full output buffer).
string output = mageProcess.StandardOutput.ReadToEnd();
mageProcess.WaitForExit();
// Determine whether the update of the manifest was a success.
bool isSuccessfullyConfigured = output.ToLower().Contains("successfully signed");
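The Mage call covers steps 4 and 5; steps 2 and 6 (removing and re-adding the *.deploy extensions) are just file renames. A rough sketch, assuming everything under the application version directory is processed:
using System.IO;

// Step 2: strip the ".deploy" extension so the payload files can be edited and re-signed.
static void StripDeployExtensions(string versionDirectory)
{
    foreach (string file in Directory.GetFiles(versionDirectory, "*.deploy", SearchOption.AllDirectories))
    {
        File.Move(file, file.Substring(0, file.Length - ".deploy".Length));
    }
}

// Step 6: put the ".deploy" extension back on everything except the manifests.
static void RestoreDeployExtensions(string versionDirectory)
{
    foreach (string file in Directory.GetFiles(versionDirectory, "*", SearchOption.AllDirectories))
    {
        if (!file.EndsWith(".manifest") && !file.EndsWith(".application") && !file.EndsWith(".deploy"))
        {
            File.Move(file, file + ".deploy");
        }
    }
}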

Issue with getting database via Sitecore API

We noticed a slight oddity in the Sitecore API code. The code is below for your reference. It is trying to get a database by doing new Database(database), but it was failing randomly.
This code worked for a while with Database db = new Database(database); but started failing randomly yesterday. When we changed the code to Database db = Database.GetDatabase(database);, the code started working again. What is the difference between the two approaches, and which one is recommended by Sitecore?
I've seen this happen multiple times in production and a couple of times in my development environment.
public static void DeleteItem(string id, string database)
{
    // get the database
    Database db = new Database(database);
    // get the item
    Item item = db.GetItem(new ID(id));
    if (item != null)
    {
        using (new Sitecore.SecurityModel.SecurityDisabler())
        {
            // delete the item
            item.Delete();
        }
    }
}
A common way you will see people get a specific database is:
Sitecore.Data.Database master = Sitecore.Configuration.Factory.GetDatabase("master");
This is equivalent to Sitecore.Data.Database.GetDatabase("master").
When you call either of these methods it will first check the cache for the database. If not found it will build up the database with all of the configuration values within the config file via reflection. Once the database is created it will be placed in the cache for future use.
When you use the constructor on the database, it simply creates a rather empty database object. I am rather surprised to hear it was working at all when you used this method.
The proper approach to get a specific database would be to use:
Sitecore.Configuration.Factory.GetDatabase("master");
// or
Sitecore.Data.Database.GetDatabase("master");
If you are looking to get the database used with the current request (aka context database) you can use Sitecore.Context.Database. You can also use Sitecore.Context.ContentDatabase.
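Applied to the DeleteItem method from the question, a corrected sketch (keeping the question's signature; not an official Sitecore sample) would be:
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.SecurityModel;

public static void DeleteItem(string id, string database)
{
    // Resolve the configured database from the factory instead of constructing an empty one.
    Database db = Sitecore.Configuration.Factory.GetDatabase(database);

    Item item = db.GetItem(new ID(id));
    if (item != null)
    {
        using (new SecurityDisabler())
        {
            item.Delete();
        }
    }
}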
