I have a CSV file of the form (unimportant columns hidden)
player,game1,game2,game3,game4,game5,game6,game7,game8
Example data:
Alice,0,-10,-30,-60,-30,-50,-10,30
Bob,10,20,30,40,50,60,70,80
Charlie,20,0,20,0,20,0,20,0
Derek,1,2,3,4,5,6,7,8
Emily,-40,-30,-20,-10,10,20,30,40
Francine,1,4,9,16,25,36,49,64
Gina,0,0,0,0,0,0,0,0
Hank,-50,50,-50,50,-50,50,-50,50
Irene,-20,-20,-20,50,50,-20,-20,-20
I am looking for a way to make a Data Studio view where I can see a chart of all the results of a certain player. How would I make a custom field that combines the data from game1 to game8 so I can make a chart of it?
| Name | Scores |
|----------|---------------------------------|
| Alice | [0,-10,-30,-60,-30,-50,-10,30] |
| Bob | [10,20,30,40,50,60,70,80] |
| Charlie | [20,0,20,0,20,0,20,0] |
| Derek | [1,2,3,4,5,6,7,8] |
| Emily | [-40,-30,-20,-10,10,20,30,40] |
| Francine | [1,4,9,16,25,36,49,64] |
| Gina | [0,0,0,0,0,0,0,0] |
| Hank | [-50,50,-50,50,-50,50,-50,50] |
| Irene | [-20,-20,-20,50,50,-20,-20,-20] |
The goal of the resulting chart would be something like this, where game1 is the first point and so on.
If this is not possible, how would I best represent the data so that what I'm looking for works in Data Studio? I currently have this implemented in a Google Sheet, but the issue is that there's no way to make separate views, so when someone selects a row it changes for everyone viewing it.
If you have two game files as data sources, I guess you want to combine them by the player name, right?
You can do that with the data blending option; I think it is under Resource > Manage blends.
Then you can create a blended data source, merging them by name.
You can also add both score fields, with different labels.
This is some documentation about it: https://support.google.com/datastudio/answer/9061420?hl=en
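If blending doesn't quite get you there, another option (not covered above) is to reshape the sheet into long format, one row per player per game, before connecting it: a chart can then use "game" as the dimension and "score" as the metric, with a filter control on the player so each viewer picks their own row. Here is a minimal sketch, assuming the data sits in a local scores.csv and that a pandas preprocessing step is acceptable (both are my assumptions):

```python
import pandas as pd

# Read the wide table: player, game1 ... game8 (drop any unneeded columns first)
wide = pd.read_csv("scores.csv")

# Reshape to long format: one row per (player, game, score)
long_form = wide.melt(id_vars="player", var_name="game", value_name="score")

# Derive a numeric game index so the points sort in game order
long_form["game_number"] = long_form["game"].str.replace("game", "", regex=False).astype(int)
long_form = long_form.sort_values(["player", "game_number"])

# Export; connect this file (or the equivalent sheet) to Data Studio
long_form.to_csv("scores_long.csv", index=False)
```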
We have a Solr configuration based on Apache Solr 8.5.2.
We use the installation from the TYPO3 extension ext:solr 10.0.3.
This way we have multiple (39) languages and multiple cores.
As we do not need most of the languages (we definitely need one, maybe two more), I tried to remove them by deleting (moving to another folder) all the configurations I identified as belonging to other languages, leaving only these folders and files in the Solr directory:
server/
+-solr/
| +-configsets/
| | +-ext_solr_10_0_0/
| | +-conf/
| | | +-english/
| | | +-_schema_analysis_stopwords_english.json
| | | +-admin-extra.html
| | | :
| | | +-solrconfig.xml
| | +-typo3lib
| | +-solr-typo3-plugin-4.0.0.jar
| +-cores/
| | +-english/
| | +-core.properties
| +-data/
| | +-english/
: : :
I thought that after restarting the server it would only present one language and one core. This was correct.
But on startup it reported all the other languages as missing, like:
core_es: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core core_es: Error loading schema resource spanish/schema.xml
Where does Solr get this information about all these languages I don't need?
How can I avoid this long list of warnings?
First of all, it does not hurt to have those cores. As long as they are empty and not loaded, they do not take much RAM or CPU.
But if you still want to get rid of them, you need to do it correctly. Just moving a core's data directory does not delete the core, because the Solr server also needs to adjust its config files. The best way is to use curl, like this:
curl 'http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core_en&deleteInstanceDir=true'
That would remove the core and all its data.
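If there are several cores to unload, the same Cores API call can be scripted. A minimal sketch, assuming the server is on localhost:8983 and guessing the core names from the warnings above (adjust the list, and keep the English core you want):

```python
import requests

# Core names are guesses based on the startup warnings; list only the ones you want gone.
unwanted_cores = ["core_es", "core_fr", "core_it"]

for core in unwanted_cores:
    # Same call as the curl example above: unload the core and delete its instance directory.
    response = requests.get(
        "http://localhost:8983/solr/admin/cores",
        params={"action": "UNLOAD", "core": core, "deleteInstanceDir": "true"},
    )
    print(core, response.status_code)
```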
I'd like your help with an issue. I'm using Watson Discovery Service (WDS), and I created a collection from several pieces of a manual. Once I did that, in the Conversation service I also created, I put some descriptions on the intents that Discovery should use. Now, when I try to match these descriptions in the Discovery Service, it doesn't recognize them unless I type exactly the same text. Any suggestion on what I can do to fix it?
For example, I uploaded a metadata .txt file with the following fields:
+---------------------+------------+-------------+-----------------------+---------+------+
| Document | DocumentID | Chapter | Session | Title | Page |
+---------------------+------------+-------------+-----------------------+---------+------+
| Instructions Manual | BR_1 | Maintenance | Long Period of Disuse | Chassis | 237 |
+---------------------+------------+-------------+-----------------------+---------+------+
Now, when I search in Discovery, I need to use exactly the word I put in the intent's description (Chassis). Otherwise Discovery does not match it, even with a query like the one below:
metadata.Title:chas*|metadata.Chapter:chas*|metadata.Session:chas*
Any idea??
Please check whether the syntax is right or wrong by testing it against the Discovery tool.
Sometimes we need quotation marks escaped with a backslash.
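For example (only a guess at what that escaping looks like, since it depends on how the query is sent), a multi-word value such as the Session field needs quotation marks in the Discovery query language, and those quotes have to be backslash-escaped once the query string is embedded in a JSON request body:

```
In the query tool:   metadata.Session:"Long Period of Disuse"
In a JSON body:      {"query": "metadata.Session:\"Long Period of Disuse\""}
```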
I have used the Lehigh University Benchmark (LUBM) to test my application.
What I know about LUBM is that its ontology contains 43 classes.
But when I query over the classes, I get only 14 classes!
Also, when I use the Sesame Workbench and check the "Types in Repository" section, I get these 14 classes:
AssistantProfessor
AssociateProfessor
Course
Department
FullProfessor
GraduateCourse
GraduateStudent
Lecturer
Publication
ResearchAssistant
ResearchGroup
TeachingAssistant
UndergraduateStudent
University
Could anyone explain the difference between them to me?
Edit: The problem is partially solved, but now: how can I retrieve RDF instances from the upper levels of the ontology (e.g. Employee, Book, Article, Chair, College, Director, PostDoc, JournalArticle, etc.), or let's say from all 43 classes? I can only retrieve instances of the lower classes (the 14 classes above). The following picture shows retrieving the instances of ub:Department.
You didn't mention what data you're using, so we can't be sure that you're actually using the correct data, or even know what version of it you're using. The OWL ontology can be downloaded from the Lehigh University Benchmark (LUBM), where the OWL version of the ontology is univ-bench.owl.
Based on that data, you can use a query like this to find out how many OWL classes there are:
prefix owl: <http://www.w3.org/2002/07/owl#>
select (count(?class) as ?numClasses) where { ?class a owl:Class }
--------------
| numClasses |
==============
| 43 |
--------------
I'm not familiar with the Sesame workbench, so I'm not sure how it's counting types, but it's easy to see that different ways of counting types can lead to different results. For instance, if we only count the types of which there are instances, we only get six classes (and they're the OWL meta-classes, so this isn't particularly useful):
select distinct ?class where { ?x a ?class }
--------------------------
| class |
==========================
| owl:Class |
| owl:TransitiveProperty |
| owl:ObjectProperty |
| owl:Ontology |
| owl:DatatypeProperty |
| owl:Restriction |
--------------------------
Now, that's what happens if you're just querying on the ontology itself. The ontology only provides the definitions of the vocabulary that you might use to describe some actual situation. But where can you get descriptions of actual (or fictitious) situations? Note that at SWAT Projects - the Lehigh University Benchmark (LUBM) there's a link below the Ontology download:
Data Generator(UBA):
This tool generates syntetic OWL or DAML+OIL data
over the Univ-Bench ontology in the unit of a university. These data
are repeatable and customizable, by allowing user to specify seed for
random number generation, the number of universities, and the starting
index of the universities.
* What do the data look like?
If you follow the "what do the data look like" link, you'll get another link to an actual sample file,
http://swat.cse.lehigh.edu/projects/lubm/University0_0.owl
That actually has some data in it. You can run a query like the following at sparql.org's query processor and get some useful results:
select ?individual ?class
from <http://swat.cse.lehigh.edu/projects/lubm/University0_0.owl>
where {
?individual a ?class
}
-------------------------------------------------------------------------------------------------------------------------------------------------------------
| individual | class |
=============================================================================================================================================================
| <http://www.Department0.University0.edu/AssociateProfessor9> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#AssociateProfessor> |
| <http://www.Department0.University0.edu/GraduateStudent127> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#GraduateStudent> |
| <http://www.Department0.University0.edu/UndergraduateStudent98> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
| <http://www.Department0.University0.edu/UndergraduateStudent182> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
| <http://www.Department0.University0.edu/GraduateStudent1> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#TeachingAssistant> |
| <http://www.Department0.University0.edu/AssistantProfessor4/Publication4> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#Publication> |
| <http://www.Department0.University0.edu/UndergraduateStudent271> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
| <http://www.Department0.University0.edu/UndergraduateStudent499> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
| <http://www.Department0.University0.edu/UndergraduateStudent502> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
| <http://www.Department0.University0.edu/GraduateCourse61> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#GraduateCourse> |
| <http://www.Department0.University0.edu/AssociateProfessor10> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#AssociateProfessor> |
| <http://www.Department0.University0.edu/UndergraduateStudent404> | <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#UndergraduateStudent> |
…
I think that to get the kind of results you're looking for, you need to download this data, or download a version of the UBA test data generators and generate some of your own data.
If you use UBA (the LUBM data generator), you get instance data, where instances are declared to be of certain types. E.g.
<http://www.Department0.University31.edu/FullProfessor4>
rdf:type
ub:FullProfessor
It turns out, when you run UBA, it only asserts instances into the 14 classes you mention.
The LUBM ontology actually defines 43 classes. These are classes that are available to be used in instance data sets.
In OpenRDF Sesame, when it lists "Types in Repository", it is apparently just showing those types which are actually "used" in the data. (I.e., there is at least 1 instance asserted to be of that type/class in the data.)
That is the difference between your two lists. When you look at the ontology, there are 43 classes defined (available for use), but when you look at actual instance data generated by LUBM UBA, only those 14 classes are directly used.
(NOTE: If you had a triple store with OWL reasoning turned on, the reasoner would assert the instances into more of the classes defined in the ontology.)
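If a reasoner isn't available, one workaround (my suggestion, not part of the LUBM tooling) is to load the ontology together with the generated instance data and follow the explicit subclass links with a SPARQL 1.1 property path. A sketch using Python and rdflib, assuming a local copy of univ-bench.owl sits next to the script:

```python
from rdflib import Graph

g = Graph()
# Local copy of the LUBM ontology; it supplies the rdfs:subClassOf hierarchy.
g.parse("univ-bench.owl", format="xml")
# The sample instance data file mentioned above.
g.parse("http://swat.cse.lehigh.edu/projects/lubm/University0_0.owl", format="xml")

# Instances of ub:Employee, found by walking explicit subclass links
# (e.g. FullProfessor -> Professor -> Faculty -> Employee).
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ub:   <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#>
SELECT DISTINCT ?individual WHERE {
  ?individual a ?type .
  ?type rdfs:subClassOf* ub:Employee .
}
"""
for row in g.query(query):
    print(row.individual)
```

This only covers classes reachable through explicit rdfs:subClassOf statements; classes defined by OWL restrictions (ub:Chair, for instance, is defined in terms of heading a department) still require an OWL reasoner, as the note above says.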
I'm curious what the difference is between the tokens "Trusted_Connection" and "Integrated Security" in SQL Server connection strings (I believe other databases/drivers don't support these). I understand that they are equivalent.
They are synonyms for each other and can be used interchangeably.
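For example, with placeholder server and database names, these two connection strings request the same Windows-authenticated connection and are treated identically by the SQL Server client:

```
Server=myServer;Database=myDatabase;Trusted_Connection=True;
Server=myServer;Database=myDatabase;Integrated Security=SSPI;
```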
In .Net, there is a class called SqlConnectionStringBuilder that is very useful for dealing with SQL Server connection strings using type-safe properties to build up parts of the string. This class keeps an internal list of synonyms so it can map from one value to another:
+----------------------+-------------------------+
| Value | Synonym |
+----------------------+-------------------------+
| app | application name |
| async | asynchronous processing |
| extended properties | attachdbfilename |
| initial file name | attachdbfilename |
| connection timeout | connect timeout |
| timeout | connect timeout |
| language | current language |
| addr | data source |
| address | data source |
| network address | data source |
| server | data source |
| database | initial catalog |
| trusted_connection | integrated security |
| connection lifetime | load balance timeout |
| net | network library |
| network | network library |
| pwd | password |
| persistsecurityinfo | persist security info |
| uid | user id |
| user | user id |
| wsid | workstation id |
+----------------------+-------------------------+
(Compiled with help from Reflector)
There are other similar classes for dealing with ODBC and OleDb connection strings, but unfortunately nothing for other database vendors - I would assume the onus is on a vendor's library to provide such an implementation.
They are the same.
Unfortunately, there are several variations like this, including:
Server/Data Source
Database/Initial Catalog
I'm not sure of the origins of the variations; I assume some are meant to be generic (not database-centric, so your connection string would look very similar whether you're connecting to an RDBMS or to a directory service, etc.).
So a little bit later I discovered the origins of the name clash: one set of tokens was used by ODBC and a different set was defined for OLE DB. For SQL Server, for legacy reasons, both are still supported interchangeably.
Trusted_Connection=true is the ODBC form and Integrated Security=SSPI is the OLE DB form.
In my case I discovered a difference between "Trusted_Connection" and "Integrated Security". I am using Microsoft SQL Server 2005. Originally I used Windows logon (Integrated Security=SSPI). But when I replaced Windows authentication with SQL Server authentication, adding a User ID and password, replacing SSPI with "False" failed: it returned a "Multiple-step OLE DB operation generated error". However, when I replaced "Integrated Security=False" with "Trusted_Connection=no", it worked.
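In other words, with placeholder names and credentials (and guessing at the rest of the string), the change that finally worked looked roughly like this:

```
Failed:  Data Source=myServer;Initial Catalog=myDb;Integrated Security=False;User ID=myUser;Password=myPassword;
Worked:  Data Source=myServer;Initial Catalog=myDb;Trusted_Connection=no;User ID=myUser;Password=myPassword;
```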