I'm using Drupal 7. After enabling the Statistics module, I see, under each node, how many times it has been read (e.g. "4 reads").
I need to know in which database table these counts (e.g. "4 reads") are saved.
I want to know where they are stored so that I can use them in my own SQL.
This data is stored in the database table node_counter, in the totalcount field.
You can use the function statistics_get() to get the total number of times the node has been viewed.
Example:
// the node nid
$nid = 1;
// get the node statistics
$node_stats = statistics_get($nid);
// get the count of the node reads
$node_reads = $node_stats['totalcount'];
Or, if you need to access it directly with SQL code,
SELECT totalcount FROM node_counter WHERE nid = 1;
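From module code, you could also run that query with Drupal 7's db_query(); a minimal sketch (the $nid value is illustrative):

// Fetch the total read count for one node straight from node_counter.
$nid = 1;
$total = db_query('SELECT totalcount FROM {node_counter} WHERE nid = :nid',
  array(':nid' => $nid))->fetchField();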
To give a simplified example:
I have a database with one table, names, which has 1 million records, each containing a common boy's or girl's name, with more added every day.
I have an application server that takes as input an HTTP request from parents using my website 'Name Chooser'. With each request, I need to pick a name from the DB and return it, and then NOT give that name to another parent. The server is concurrent, so it can handle a high volume of requests, yet it has to respect "unique name per request" and still be highly available.
What are the major components and strategies for an architecture of this use case?
From what I understand, you have two operations: adding a name and choosing a name.
I have a couple of questions:
Question 1: Do parents choose names only, or do they also add names?
Question 2: If they add names, does that mean that when a name is added it should also be marked as already chosen?
Assuming that you don't want all name-selection requests to wait for one another (by locking or queueing them):
One solution to resolve concurrency, in the case of choosing a name only, is to use an Optimistic Offline Lock.
The most common implementation of this is to add a version field to your table and increment it whenever you mark a name as chosen. You will need DB support for this, but most databases offer a mechanism for it. Some MongoDB object-document mappers (Mongoose, for example) add a version field to documents by default. For an RDBMS you have to add this field yourself.
You haven't specified what technology you are using, so I will give an example in pseudo code in the style of an SQL DB. For MongoDB, you can check how your mapping layer makes these checks for you.
NameRecord {
    id,
    name,
    parentID,
    version,    // bumped on every change, used for the optimistic check
    isChosen,

    function chooseForParent(parentID) {
        // never hand out a name that is already taken
        if (this.isChosen) {
            throw Error/Exception;
        }
        this.parentID = parentID;
        this.isChosen = true;
        this.version++;
    }
}
NameRecordRepository {
    function getByName(name) { ... }

    function save(record) {
        // the version was incremented in memory, so the DB must still
        // hold the previous version for this update to be valid
        var oldVersion = record.version - 1;
        var query = "UPDATE records SET .....
                     WHERE id = {record.id} AND version = {oldVersion}";
        var rowsCount = db.execute(query);
        // zero affected rows means someone else updated the record first
        if (rowsCount == 0) {
            throw ConcurrencyViolation;
        }
    }
}
// somewhere else, in an object or module or whatever...
function chooseName(parentID, name) {
    var record = NameRecordRepository.getByName(name);
    record.chooseForParent(parentID);
    NameRecordRepository.save(record);
}
Before this object is saved to the DB, a version comparison must be performed. SQL provides a way to execute a query conditionally and return the count of affected rows. In our case we check whether the version in the database is still the old one before updating; if it isn't (zero rows affected), that means someone else has updated the record.
In this simple case you can even remove the version field and use the isChosen flag in your SQL query like this:
var query = "UPDATE records SET .....
             WHERE id = {record.id} AND isChosen = false";
When adding a new name to the database, you will need a unique constraint on the name column; that solves the concurrency issues on insert, as sketched below.
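A minimal sketch of what that could look like in SQL (the table and column names mirror the pseudo code above and are illustrative):

-- The UNIQUE constraint makes one of two concurrent INSERTs of the
-- same name fail with a constraint violation the application can catch.
CREATE TABLE records (
    id       INT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL UNIQUE,
    parentID INT NULL,
    version  INT NOT NULL DEFAULT 0,
    isChosen BOOLEAN NOT NULL DEFAULT FALSE
);

-- A fully spelled-out form of the simplified conditional update:
UPDATE records
SET parentID = :parentID, isChosen = TRUE
WHERE id = :id AND isChosen = FALSE;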
I was wondering if it's possible to create a numeric count index where the first document would be 1 and, as new documents are inserted, the count would increase. If so, is it also possible to apply it to documents imported via mongoimport? I have created an index via db.collection.createIndex( { index: 1 } ), but it doesn't seem to be applied.
I would strongly recommend using ObjectId as your _id field. It is a good value for distributed systems, and it also encodes the date it was created. On top of that, MongoDB automatically indexes _id.
Example using Morphia:
Date d = ...;
Query<MyClass> query = datastore.createQuery(MyClass.class);
query.field("_id").greaterThanOrEq(new ObjectId(d));
query.order("_id");
query.limit(100);
List<MyClass> myDocs = query.asList();
This would fetch all documents created since date d in order of creation.
To load the next batch, change to:
query.field("_id").greaterThan(lastDoc.getId());
This will very efficiently load the next batch based on the ID of the last document from the previous batch.
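A sketch of a complete paging loop built on that idea (assuming MyClass exposes a getId() returning its ObjectId, as lastDoc.getId() above suggests):

ObjectId lastSeenId = new ObjectId(d);
List<MyClass> batch;
do {
    Query<MyClass> q = datastore.createQuery(MyClass.class);
    q.field("_id").greaterThan(lastSeenId);
    q.order("_id");
    q.limit(100);
    batch = q.asList();
    for (MyClass doc : batch) {
        // process each document in creation order
    }
    if (!batch.isEmpty()) {
        // remember where this batch ended so the next one starts after it
        lastSeenId = batch.get(batch.size() - 1).getId();
    }
} while (!batch.isEmpty());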
I am connecting to a UniVerse database (from Rocket Software) using their .NET driver. I would like to fetch data on demand per user request, one page at a time, i.e., do pagination. With other databases we could use OFFSET ... FETCH, but UniVerse does not seem to support it. It does not recognize the keyword OFFSET; something like
SELECT NAME, AGE FROM CONTACTS WHERE AGE > 25 OFFSET 5 SAMPLE 5
does not work. It does not recognize those keywords, and there is no good documentation. :-(
Note: although it is traditionally a multi-value database, the one I am using does not use multi-value types; the structure is normalized.
This is certainly one of the shortcomings of this platform. I have worked around it in the past with something similar to the following subroutine. I had to remove a bunch of stuff for brevity, but this compiles, so it must be completely bug free, right?
Caveats: you need to have a #SELECT DICT item in each file you want to use this with, containing all of the columns you want to return.
Multivalues get a little tricky. I had flattened the data I was using this with, so I did not run into that problem; note that this does not do UNNESTs.
Also, you might want to add a value saying how many records there are in total, and possibly work out some kind of token passing and list saving to cut down on executing the query each time you run it, but that goes much, much deeper than the basic question at hand.
SUBROUTINE SQLSelectWithOffset(TableName,UVWithClause,Starting,Offset)
***********************************************************************
* PROGRAM ID: SQLSelectWithOffset
*
* PROGRAM TITLE: SQLSelectWithOffset
*
* DESCRIPTION: UniVerse doesn't support SQL commands using STARTING and OFFSET,
* which makes life hard when you want all of a file
* but you choke on the size. Tokens allow for the select list to be saved.
* TableName = UV file to select on. If this is blank the program will return the number of records remaining.
* UVWithClause = Your criteria, the WITH or BY criteria you want in a sort select.
* Starting = Holds your place in line
* Offset = How many records to return
************************************************************************
$INCLUDE UNIVERSE.INCLUDE ODBC.H
RETURN.LIST = ""
IF Starting = "" OR Starting < 1 THEN
Starting = 1
END
GOSUB GET.MASTER.LIST
* Return Offset records beginning at position Starting
FOR X = Starting TO Starting + Offset - 1
ID = EXTRACT(FULL.LIST,X,0,0)
IF ID = "" THEN CONTINUE
RETURN.LIST<-1> = ID
NEXT X
SELECT RETURN.LIST TO 9
SQLSTMT ="SELECT * FROM ":TableName:" SLIST 9"
ST=SQLExecDirect(#HSTMT, SQLSTMT)
RETURN
GET.MASTER.LIST:
STMT = "SSELECT ":TableName
IF UVWithClause NE "" THEN
STMT := " ":UVWithClause
END
EXECUTE "CLEARSELECT"
EXECUTE STMT
READLIST FULL.LIST ELSE FULL.LIST = ""
RETURN
END
Good luck, please only use this information for good!
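For example, once the subroutine is compiled and cataloged, it could be called from UniVerse BASIC like this (a sketch; the file name and criteria are illustrative):

CALL SQLSelectWithOffset("CONTACTS", "WITH AGE > 25 BY NAME", 6, 5)

With the loop above, that returns records 6 through 10 of the sorted selection, i.e., the second page of five.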
I exported two tables, named Keys and Acc, as CSV files from SQL Server and imported them successfully into Neo4j using the commands below.
CREATE INDEX ON :Keys(IdKey)
USING PERIODIC COMMIT 500
LOAD CSV FROM 'file:///C:/Keys.txt' AS line
MERGE (k:Keys { IdKey: line[0] })
SET k.KeyNam=line[1], k.KeyLib=line[2], k.KeyTyp=line[3], k.KeySubTyp=line[4]
USING PERIODIC COMMIT 500
LOAD CSV FROM 'file:///C:/Acc.txt' AS line
MERGE (callerObject:Keys { IdKey : line[0] })
MERGE (calledObject:Keys { IdKey : line[1] })
MERGE (callerObject)-[rc:CALLS]->(calledObject)
SET rc.AccKnd=line[2], rc.Prop=line[3]
Keys holds the source code objects, and Acc holds the relations among them. I imported these two tables three times, once for each of three different application projects. To keep the IdKey property unique across the three applications, I concatenated a five-character prefix onto IdKey while exporting from SQL Server, identifying which application each object belongs to, because (as I learned from the manuals) an index cannot be created on multiple fields. Now my aim is to construct the relations among applications. For example:
Node1 is a source code object of Application1
Node2 is another source code object of Application1
Node3 is a source code object of Application2
There is already a CALLS relation from Node1 to Node2 because of the record in Acc that was already imported.
The name of Node2 is equal to the name of Node3, so we can say that Node2 and Node3 are in fact the same source code. Therefore we should create a relation from Node1 to Node3. To realize this, I wrote the command below, but I want to be sure it is correct, because I do not know how long it will take to execute.
MATCH (caller:Keys)-[rel:CALLS]->(called:Keys),(calledNew:Keys)
WHERE calledNew.KeyNam = called.KeyNam
and calledNew.IdKey <> called.IdKey
CREATE (caller)-[:CALLS]->(calledNew)
The following query should be efficient, assuming you also create an index on :Keys(KeyNam).
MATCH (caller:Keys)-[rel:CALLS]->(called:Keys)
WITH caller, COLLECT(called.KeyNam) AS names
MATCH (calledNew:Keys)
WHERE calledNew.KeyNam IN names AND NOT (caller)-[:CALLS]->(calledNew)
CREATE (caller)-[:CALLS]->(calledNew)
Cypher will not use an index when doing comparisons directly between property values. So this query puts all the called names for each caller into a names collection, and then does a comparison between calledNew.KeyNam and the items in that collection. This causes the index to be used, and will speed up the identification of potential duplicate called nodes.
This query also does a NOT (caller)-[:CALLS]->(calledNew) check, to avoid creating duplicate relationships between the same nodes.
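For completeness, that index can be created the same way as the IdKey index shown earlier:

CREATE INDEX ON :Keys(KeyNam)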
The root contains one folder, named pending, of type sling:Folder.
It has a number of nodes of type nt:unstructured, each named with a long value; that long value is very important for my code processing.
Now I want to get the data of the top 20 nodes (the 20 smallest node names, i.e., the smallest long values) from this pending folder.
Can you tell me how to write a JCR query for this situation?
Edit No. 1
Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
// Obtain the query manager for the session via the workspace ...
QueryManager queryManager = session.getWorkspace().getQueryManager();
// Create a query object ...
String expression = "SELECT * FROM [nt:base] AS s WHERE ISDESCENDANTNODE([/pending])";
Query query = queryManager.createQuery(expression, javax.jcr.query.Query.JCR_SQL2);
// Execute the query and get the results ...
QueryResult result = query.execute();
// Iterate over the nodes in the results ...
NodeIterator nodeIter = result.getNodes();
But it gives the nodes in some order that differs from the order in the root node; the results are not sorted.
Edit No. 2
Now I understand how this function works, and it is working fine. What I learned is that it orders the node just above the destination node given in its second parameter.
But the nodes that come in have different names (numbers), so how can I sort them using orderBefore? We cannot always know the right location (the destination relative path) where each node has to be put.
You probably don't need a query for this. If you have a structure such as
/pending/1
/pending/2
...
/pending/999
you can just iterate over the nodes using the JCR Node's getNodes() method, which returns a NodeIterator.
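A minimal sketch of that iteration, collecting the 20 smallest names (assuming an open JCR Session named session, as in the code above):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.jcr.Node;
import javax.jcr.NodeIterator;

// Collect the numeric child names of /pending, then keep the 20 smallest.
Node pending = session.getNode("/pending");
NodeIterator it = pending.getNodes();
List<Long> values = new ArrayList<Long>();
while (it.hasNext()) {
    values.add(Long.parseLong(it.nextNode().getName()));
}
Collections.sort(values);
List<Long> smallest = values.subList(0, Math.min(20, values.size()));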
Using the sling:OrderedFolder node type for pending gives a predictable ordering of the child nodes.
In general, using the tree structure instead of queries is more efficient in JCR.
Note also that if you're using Jackrabbit, having more than about 10,000 child nodes under the same parent can lead to performance issues.