I installed two plugins in Neo4j Desktop. I then tried to call their procedures, but it did not work. Does anybody have an idea why?
Make sure to restart your database. You can then check whether the plugins are installed, e.g. with RETURN apoc.version() or RETURN gds.version().
For APOC you can use CALL apoc.help("") to see the installed/available procedures and functions.
Generally you can use SHOW PROCEDURES and SHOW FUNCTIONS.
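For example, after restarting you can run these checks in Neo4j Browser or cypher-shell. This is a minimal sketch; SHOW PROCEDURES and SHOW FUNCTIONS assume Neo4j 4.3 or later (on older versions use CALL dbms.procedures() and CALL dbms.functions()):

```cypher
RETURN apoc.version();  // errors if the APOC plugin is not loaded
RETURN gds.version();   // errors if the Graph Data Science plugin is not loaded
CALL apoc.help("");     // lists the installed/available APOC procedures and functions
SHOW PROCEDURES;        // all procedures registered on the server
SHOW FUNCTIONS;         // all functions registered on the server
```

If these fail, double-check in Neo4j Desktop that the plugins are enabled for the specific database you started, not just installed.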
My company has a folder called tools... which has some 50 CLI tools our support agents use for various troubleshooting and reporting...
The company is getting bigger... giving every rep access to our source code just so they can run the tools is not ideal... Plus, things like npm package dependencies happen, and it's more maintenance than they want.
Ideally, I would create an internal-only website that simply presents a dropdown of all the tools in the /tools folder. The web server (like Express) would execute the scripts and then redirect the standard output to the screen... The kicker is that I need to allow for standard input as well, since the tools are somewhat interactive... they get to select choices.
I'm sure there are all kinds of security issues with this, and I just want to emphasize that this would be for internal use only and run by trusted users.
I've seen various terminal emulators and projects like this, but they looked complicated to make work for our use case. I really just want to let people run a preset number of commands... I feel like this type of thing should exist and I just haven't stumbled upon it yet.
Alternatively... I've considered refactoring the tools to use something like Swagger, which would present the options for them to fill out, but that isn't ideal either, as we have conditional prompts...
You could try xterm.js to create a browser-based terminal that can execute the CLI tools.
You could use socket.io and build a Node.js app for the specific commands required.
socket.io provides the client/server communication on the web page.
Node.js gives you a framework you can pass the commands through.
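As a minimal sketch of that idea (the tool names, paths, and event names here are assumptions, and there is no authentication, so treat it strictly as a starting point):

```javascript
// server.js - spawn whitelisted CLI tools and pipe their stdio over socket.io
const http = require('http');
const { spawn } = require('child_process');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

// Preset list of runnable tools - nothing outside this map can be executed
const TOOLS = {
  report: ['node', ['tools/report.js']],
  cleanup: ['node', ['tools/cleanup.js']],
};

io.on('connection', (socket) => {
  socket.on('run', (name) => {
    const tool = TOOLS[name];
    if (!tool) {
      socket.emit('stdout', `Unknown tool: ${name}\n`);
      return;
    }
    const child = spawn(tool[0], tool[1]);
    child.stdout.on('data', (chunk) => socket.emit('stdout', chunk.toString()));
    child.stderr.on('data', (chunk) => socket.emit('stdout', chunk.toString()));
    socket.on('stdin', (line) => child.stdin.write(line + '\n')); // interactive answers
    child.on('exit', (code) => socket.emit('exit', code));
  });
});

server.listen(3000);
```

On the client, xterm.js (or even a plain textarea) can render the 'stdout' events and send user input back as 'stdin' events. One caveat: fully interactive tools that expect a real TTY may buffer their prompts, in which case something like node-pty is a better fit than plain spawn.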
At the end of last week our central IT department introduced SCCM and applied it to a number of clients in our division. My colleagues and I work as so-called "IT partners", doing first-level support for a few hundred colleagues. Now we're facing some problems with our new SCCM system (installed packages do not work, etc.), and we'd like to "reset" applications so the SCCM agent will reinstall them. I've read something about the detection methods, but unfortunately I do not really know how they work, nor do I know where those methods are saved. I want to "analyse" those methods so I know which file to modify or delete so that the agent will reinstall the application.
By the way, how much time does SCCM take from "assigning" a package to applying it on the client?
Assuming you only have the client and no access to the SCCM console, the detection methods can be found using WMI. They are stored in the class Local_Detect_Synclet in the root\ccm\CIModels namespace.
The format is XML in one column, and it is designed so that all kinds of detection methods can be represented in the same style. That makes it not very readable, but you should be able to get a basic understanding of the detection method used.
Keep in mind this is only true if the software was deployed in the "new" application format (introduced in SCCM 2012) and not in the "old" package/program format.
If you want more detail: I once tried to automate the process of triggering a reinstall for any given application but ultimately failed due to problems with the cache/distribution point. I posted all my findings here.
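If you want to take a quick look yourself, a WMI query from an elevated PowerShell prompt on the client is enough. This is a sketch; the exact property layout can differ between SCCM versions:

```powershell
# Dump the detection-method synclets stored on the client, XML payload included
Get-WmiObject -Namespace "root\ccm\CIModels" -Class Local_Detect_Synclet |
    Format-List *
```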
So, from an application point of view: when you deploy an app, a detection method is set up in SCCM to determine whether or not the application installed successfully. This detection method can be configured in a variety of ways. For example, it could check whether the MSI product code is installed, compare the .exe against a specific version, or check a registry key for existence. In order to change or modify these detection methods you need to be an SCCM admin and be able to log in to the console. From there you would select the specific application or package you want to analyse and click through the properties of the deployment.
I am working on Adobe CQ. I created 2-3 versions (1.2, 1.2, 1.3) of a particular page on my author instance. I then packaged my content page and installed it on another instance, but I couldn't see the versions of the page that I had installed on the other instance.
Can anyone help me out with this? I want to migrate my content pages along with their versions from one CQ instance to another.
We are in the same situation. You can extract prior version details using the packaging approach, but you will be precluded from loading them back in due to the new Oak security model. The next issue is that you would need to extract and transform the data before reinserting it, because the node IDs can differ, especially if you are extracting partial data sets.
Where we have gotten to, and are proving now, is using the new migration tool, which purportedly has a version extract facility, to move content from instance to instance. I will update the details here when we get our results back.
UPDATE:
We have tested the CRX2OAK migration tool, and it indeed does move versions across. Using the tool, you can specify filters to only migrate a subset of content, which will then drag the version details across as well.
It seems this approach works quite well for both single-tenancy and multi-tenancy setups, much as using a package for content does.
Unfortunately, it can't be used as a portable backup system, as it is an instance-to-instance solution. It does, however, work well for blue/green deployment strategies.
Versions are stored under the path /jcr:system/jcr:versionStorage in AEM.
To transfer pages with their versions, just create a package with filters for the content you want to move as well as the version storage path, download the package, and install it on the other AEM instance.
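As a sketch, the package's filter definition (META-INF/vault/filter.xml) could look like this; /content/mysite is just a placeholder for your own content path:

```xml
<workspaceFilter version="1.0">
    <!-- the pages themselves -->
    <filter root="/content/mysite"/>
    <!-- their version histories -->
    <filter root="/jcr:system/jcr:versionStorage"/>
</workspaceFilter>
```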
If anyone comes across this question like me, here is the summarised answer:
You can use the crx2oak utility, available from the link below, to migrate pages and page versions across instances:
https://repo.adobe.com/nexus/content/groups/public/com/adobe/granite/crx2oak/
This is a powerful utility with multiple uses (especially in upgrades), as documented in the links below:
https://docs.adobe.com/docs/en/aem/6-2/deploy/upgrade/using-crx2oak.html
https://jackrabbit.apache.org/oak/docs/migration.html
The source and destination repositories need to be offline while running this utility, so it is best to plan ahead for this type of migration.
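A typical invocation looks roughly like the following. This is a sketch based on the Oak migration documentation linked above; the flags and paths vary by version, so verify them against the docs for your release:

```sh
java -Xmx4g -jar crx2oak.jar \
    --include-paths=/content/mysite \
    /path/to/source/crx-quickstart/repository \
    /path/to/target/crx-quickstart/repository
```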
HTH
I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library; then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers, and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I get any advantages from using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
I have been working intensively with PAA files, and I must say that it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does need a startup process, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, as well as use the wsadmin scripting interface. I only update resource environment settings and the like in WAS, and do not touch any properties files, for exactly that reason: all settings are stored in WAS.
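For example, resource environment provider settings can be inspected and changed through wsadmin with a short Jython snippet like this (a sketch; "MyProvider" is an assumption, and the attributes you modify will depend on your configuration):

```python
# run via: wsadmin -lang jython -f rep_settings.py
provider = AdminConfig.getid('/ResourceEnvironmentProvider:MyProvider/')
print AdminConfig.show(provider)    # dump the provider's current configuration
# ...change attributes with AdminConfig.modify(provider, [['attr', 'value']])...
AdminConfig.save()                  # persist to the master configuration
```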
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't think I understand why you are doing the import manually and not syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe it can restart applications either.
I recently backed up my local PostgreSQL database and imported (restored) it into Heroku's SHARED_DATABASE.
heroku pgbackups:restore SHARED_DATABASE 'url_to_pg_dump'
Everything seems to work except the function, which is not created during the import. I verified its absence via the heroku console.
I wonder if this is a limitation of Heroku's SHARED_DATABASE or whether I messed up some setting during the process.
I would like to know from anyone who had experience with this.
Thanks in advance.
The current shared databases don't support user-defined functions; however, the new in-beta ones do - I have numerous triggers etc. written in plpgsql running there.
More info here: http://devcenter.heroku.com/articles/labs-heroku-shared-postgresql
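If you want to verify this yourself, a trivial PL/pgSQL function is enough to test whether functions survive the dump/restore round trip (the function name is just an example):

```sql
-- Create locally, back up, restore to Heroku, then check it arrived
CREATE OR REPLACE FUNCTION restore_check() RETURNS text AS $$
BEGIN
    RETURN 'functions restored ok';
END;
$$ LANGUAGE plpgsql;

-- On the restored database, this returns one row if the function made it across
SELECT proname FROM pg_proc WHERE proname = 'restore_check';
```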
The shared databases don't support user-defined functions:
In addition, the dedicated databases offer a number of advantages, including direct access (via psql or any native postgresql library), *stored procedures*, and PostgreSQL 9 support.
Emphasis mine. They don't explicitly say that shared databases do not support stored procedures, but they do note them as an extra feature of dedicated databases, so the result is the same.
Thanks, Mu and Neil.
I worked with support, who pointed me to the new public beta. This looks to be the next version of the shared database. I tried it, and the import worked fine; the PostgreSQL function objects were restored in good order.
http://addons.heroku.com/heroku-shared-postgresql