List all property indices in AgensGraph

I am evaluating AgensGraph v1.2 for use at my company. I am trying to use the normal Postgres shell commands in the agens shell to get information about indices, tables, etc., but I am not able to list indices with the \di command.
Is there a way to list all the property indices that are currently in the db?
It would also be helpful to have a link to some documentation of the system tables used by AgensGraph, which I could query if the shell commands are not yet fully functional for the graph side.

I was able to list the indices using the describe graph vertices command (\dGv).
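If the meta-commands fall short, one fallback is to query the standard Postgres catalog views underneath. This is a sketch, assuming AgensGraph stores each graph's labels as ordinary tables in a schema named after the graph ('my_graph' is a placeholder):

SELECT schemaname, tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'my_graph';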

Do I have to submit jobs to Spark or can I run them from a client lib?

So I'm learning about Spark and I have a question about how client libs work.
My goal is to do some sort of data analysis in Spark, telling it which data sources (databases, CSVs, etc.) to process, and to store the results in HDFS, S3 or some kind of database like MariaDB or MongoDB.
I thought about having a service (an API application) that "tells" Spark what I want to do. The question is: is it enough to set the master configuration to spark://remote-host:7077 at context creation, or should I send the application to Spark with some sort of spark-submit command?
This completely depends on how your environment is set up. If all paths are linked to your account, you should be able to run one of the two commands below to open a shell and run test commands. The reason to use a shell is that it lets you run commands dynamically, validate and learn how to chain commands together, and see what results come out.
Scala
spark-shell
Python
pyspark
Inside the environment, if everything is linked to Hive tables, you can list the tables by running
spark.sql("show tables").show(100,false)
The above command runs a "show tables" against the Spark Hive metastore catalogue and returns all tables you can see (which doesn't mean you can access the underlying data). The 100 means show up to 100 rows, and the false means show the full string rather than only the first N characters.
As a hypothetical example, if one of the tables you see is called Input_Table, you can bring it into the environment with the commands below:
val inputDF = spark.sql("select * from Input_Table")
inputDF.count
While you're learning, I would strongly advise against running the commands via spark-submit, because you will need to pass in the class and JAR, forcing you to edit and rebuild for each test. That makes it difficult to figure out how commands will behave without a lot of downtime.
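For reference, once you do have a packaged application, submitting it looks roughly like this (the master URL, class name and JAR path are placeholders):

spark-submit --master spark://remote-host:7077 --class com.example.MyJob target/my-job.jar

In other words, spark-submit is how you hand a packaged application to the cluster; the shells above are for interactive exploration.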

Is there a way to create migration script for a single or a selected group of objects with SqlPackage?

I'm trying to migrate specific objects from one database to another using sqlpackage.exe /action:Extract and sqlpackage.exe /action:Script. Currently I create the script and filter out the unneeded objects manually; I would like to be able to exclude them automatically and automate the process. So far I haven't found any option in the documentation that does this. Thanks.
There is no way to exclude single objects with native functionality. Natively you can only exclude specific object types, as in the sketch below.
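For that object-type-level filtering, SqlPackage exposes the ExcludeObjectTypes property. A sketch (the server, database, file names and excluded types are placeholders):

sqlpackage.exe /Action:Script /SourceFile:MyDb.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /OutputPath:migration.sql /p:ExcludeObjectTypes=Logins;Users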
You can write your own deployment contributor and then skip whatever objects you need. Here is an example.
Check out Ed Elliott's ready-to-use contributor with a bunch of configuration options (I haven't used it for a while and don't know how it works with the newer versions of SQL Server).
Additionally, you can find a lot of useful information on Ed Elliott's blog.

How to query for LDAP (Active Directory) deleted objects since a given time?

I need to query for incremental changes from an Active Directory forest using LDAP.
The easy part is to query for incremental updates of objects and for the creation of new objects. For this you can use the whenChanged attribute.
Example:
(&(objectClass=user)(whenChanged>=20180501000000.0Z))
So far, so good.
But what about querying for deleted records? Is there some way to query LDAP for all items deleted since a given time?
I do know that Active Directory marks objects for deletion rather than actually deleting them, and I know there is some way to get deleted objects (see this MSDN post).
But I haven't had much luck creating an LDAP query that, against a very vanilla Active Directory server, can get a list of deleted accounts.
Related: LDAP query for deleted users
I tried that suggestion too:
(&(isDeleted=TRUE)(userAccountControl:1.2.840.113556.1.4.803:=512))
Still nothing.
How can I make this work?
What programming language are you using to make the query? It seems to be an LDAP Extended Control (specifically LDAP_SERVER_SHOW_DELETED_OID) that needs to be enabled as part of the search properties, and not in the LDAP query string itself. So it depends on the implementation of how you're searching.
For example, in .NET, the DirectorySearcher class has a Tombstone property that will enable this.
Or PowerShell's Get-ADObject command has -IncludeDeletedObjects.
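For implementations without a dedicated switch, the control can usually be attached to the search request by its OID. Below is a rough sketch using the Python ldap3 library; the server, credentials and base DN are placeholders, and each control is passed as an (OID, criticality, value) tuple:

from ldap3 import Server, Connection, SUBTREE

# OID of the LDAP_SERVER_SHOW_DELETED_OID control
SHOW_DELETED = '1.2.840.113556.1.4.417'

server = Server('dc01.example.com')  # hypothetical domain controller
conn = Connection(server, user='EXAMPLE\\svc_reader', password='secret', auto_bind=True)

# The filter alone is not enough: the show-deleted control must ride along
# with the search request, or AD will hide tombstoned objects.
conn.search(
    search_base='DC=example,DC=com',
    search_filter='(&(isDeleted=TRUE)(whenChanged>=20180501000000.0Z))',
    search_scope=SUBTREE,
    attributes=['lastKnownParent', 'whenChanged'],
    controls=[(SHOW_DELETED, True, None)],
)
for entry in conn.entries:
    print(entry)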

How to pipe the complete graph to Giraph through TinkerPop 3 stack?

I have a graph with different types of nodes and relationships, and each type of node has 3-4 properties. For testing purposes I'm storing this graph on HDFS as a GraphSON file. Now I want to analyse this graph using Giraph. I've explored Giraph's IO classes and also found that Gremlin can load GraphSON directly. So could you please explain how to load the graph into Giraph using the TinkerPop stack?
See the Giraph sample in the docs; it does almost exactly what you're looking for. Instead of hadoop-gryo.properties, use hadoop-graphson.properties (and of course adjust the input location setting, gremlin.hadoop.inputLocation, in the configuration file).
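For orientation, a hadoop-graphson.properties file looks roughly like this (the input path is a placeholder, and the reader/writer keys may differ slightly between TinkerPop versions):

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat
gremlin.hadoop.inputLocation=hdfs://namenode/data/my-graph.json
gremlin.hadoop.outputLocation=output
gremlin.hadoop.jarsInDistributedCache=true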

Are most LDAP administrators creating LDIFs by hand?

Are there tools that make the job easier? If command-line only tools exist, then can anyone speculate if there is a market for a GUI tool? For example, you can create a relational database by modeling visually. Should the same notion exist for LDAP?
Apache Directory Studio includes an LDIF editor. It is still a text editor, but with syntax highlighting, autocompletion and group collapsing for LDIF files:
http://directory.apache.org/studio/
I don't know if there are any tools, but it isn't that hard to create them by hand.
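For reference, a minimal LDIF entry for a person looks like this (the DN and attribute values are made up):

dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
mail: jdoe@example.com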
If you are using iPlanet LDAP, though, it had a nice interface for creating and modifying schemas. :)
I don't know if you would consider that to be by hand; otherwise, that is one tool to use.
I've done some LDIF handling using Perl and the Net::LDAP::LDIF module and it made scripting custom LDAP conversions very easy.
Have you looked at the command-line tool LDIFDE.exe? It should be on your domain controller.
Business people give me Excel spreadsheets with inconsistent formatting of user and group data and want it loaded right away (then they come back with a new version and tell me they've only added some new users, but some are missing, some data is now invalid, there's a missing column etc.) They want unique passwords assigned, group memberships set up based on department id fields, and so forth.
Then they come back two weeks later and want to know about the differences between that spreadsheet and one from six months ago. Sigh.
I generally just do it all with a few hand-crafted Python scripts, along the lines of the sketch below.
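A minimal sketch of that kind of script, assuming a CSV with uid, cn, sn and department columns (the file names and DN suffix are made up):

import csv

# Read the business-supplied spreadsheet (exported to CSV) and emit one
# LDIF entry per row. Column names and the DN suffix are hypothetical.
with open('users.csv', newline='') as src, open('users.ldif', 'w') as out:
    for row in csv.DictReader(src):
        out.write(f"dn: uid={row['uid']},ou=People,dc=example,dc=com\n")
        out.write("objectClass: inetOrgPerson\n")
        out.write(f"uid: {row['uid']}\n")
        out.write(f"cn: {row['cn']}\n")
        out.write(f"sn: {row['sn']}\n")
        out.write(f"departmentNumber: {row['department']}\n")
        out.write("\n")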
A lot of times you may be copying objects from one tree to another. Or backing them up. In that case, most LDAP tools have some way of exporting as LDIF. Then you can easily modify the files as needed.
Or copy examples to reuse.
I have seen a number of tools that will do tasks and output the results as LDIF, which can be handy, but they are basically point usage tools.
