I made a simple chat, and I want it so that when there are 5 messages (IDs) in total, the chat starts a 5-second timer and then clears all rows. This is what I've made.
I have this to check whether we've reached 5 messages (IDs):
if($handler->lastInsertId() == 5){
clearchat_time();
}
I used "lastInsertId()" because i don't know how to call a function to count all rows, so i use this one with checking id. If you know how to do this, this would be better solution for me.
So...that will check and call this:
<script>
function clearchat_time(){
setInterval(count_chat_time, 5000);
}
function count_chat_time(){
$sql = "TRUNCATE TABLE chat";
$query = $handler->prepare($sql);
$query->execute();
}
</script>
Your question doesn't specify the versions of PHP, JavaScript, or SQL, so I am assuming PHP because of the dollar signs in the question (with some JavaScript confusion suggested by the "<script>" tag around the second part of the example) and MySQL, since that is most common with PHP in a LAMP stack.
Using SQL auto-increment IDs in this way may not be portable to other SQL dialects and is potentially untrustworthy, since (given the various dialects of SQL) the IDs are not guaranteed to be sequential, nor is the latest one guaranteed to be the highest. Additionally, if the database and table were set up by others, you cannot assume that the increment is one, or that it starts at one (1).
It is probably safer to check the actual total number of rows in the table as below:
$query = $handler->query("SELECT COUNT(*) AS total FROM chat");
$data = $query->fetch(PDO::FETCH_ASSOC);
echo $data['total'];
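Note also that in your original code the TRUNCATE runs inside a <script> tag: PHP executes on the server, so it cannot be driven by JavaScript's setInterval(). A minimal server-side sketch of the whole flow, assuming the PDO connection $handler from your question (sleep() here only mirrors your 5-second timer; in practice you would delay on the client and call a separate clear endpoint):
<?php
// Sketch only: count the rows instead of relying on lastInsertId().
$total = (int) $handler->query("SELECT COUNT(*) FROM chat")->fetchColumn();
if ($total >= 5) {
    sleep(5);                              // crude stand-in for the 5-second timer
    $handler->exec("TRUNCATE TABLE chat"); // clears all rows
}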
Suppose I have a normal "Add Users" module that I want to automate using Java scripting; how can I avoid data duplication, so as to avoid error messages such as "User already exists"?
There are numerous ways in which this can be automated. You are receiving 'User already exists' due to the fact that you're (probably) running your 'Add Users' test cases with static variables.
Note: For the following examples I will consider a basic registration flow/scenario for a new user, with name, email, and password as the required fields.
Note-002: My language of choice will be JavaScript. You should be able to reproduce the concept in Java with ease.
1.) Prepending/appending a unique identifier to the information you're submitting (e.g. Number(new Date()) returns the number of milliseconds that have elapsed since January 1, 1970, so it will always be unique when running your test case):
var timestamp = Number(new Date());
var email = 'test.e2e' + timestamp + '@<yourMainDomainHere>';
Note: Usually the name & password don't need to be unique, so you can actually use hardcoded values without any issues.
2.) The same thing can also be achieved using Math.random() (for JS), which returns a floating-point value between 0 and 1 (e.g. 0.8018194703223693), around 16-18 digits long:
var almostUnique = Math.random();
// You can go ahead and keep only the decimal digits
almostUnique = almostUnique.toString().split('.')[1];
var email = 'test.e2e' + almostUnique + '@<yourMainDomainHere>';
!!! Warning: While Math.random() is not guaranteed to be unique, in hundreds of regression runs of 200 functional test cases I never saw a duplicate.
3.) (Not so elegant, and exponentially harder to implement) If you have access to your web app's backend API and through it you can execute different actions in the DB, then you can write yourself some scripts that run after your registration test cases, like a cleanup suite.
These scripts will have to remove the previously added user from your database.
Hope this helps!
You don't mention what language you are coding in, but use your language's random function to generate random numbers and/or text for the user ID. It won't guarantee that there will not be a duplicate, but the nature of testing is such that you should be able to handle both situations anyway. If this is not clear or I don't understand your question correctly, you'll need to provide a lot more information: what you've tried, what language you use, etc.
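For example, a minimal sketch in PHP (the helper name and default domain are made up for illustration), combining a timestamp with random digits so each run gets a fresh address:
<?php
// Hypothetical helper: a timestamp plus random digits makes collisions unlikely.
function uniqueTestEmail(string $domain = 'example.com'): string
{
    $stamp = (int) (microtime(true) * 1000); // milliseconds since the epoch
    $rand  = random_int(0, 999999);          // extra entropy against collisions
    return "test.e2e{$stamp}{$rand}@{$domain}";
}
echo uniqueTestEmail(); // e.g. test.e2e1700000000000123456@example.com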
I want the number of records available in the database for the current query, but without considering the LIMIT.
$this->Orders->find('all')
    ->where(['order_quantity' => 5])
    ->limit(5);
Let's say I have 50 records matching the above query. I just want the number of records available for the current query. I can't use count() because of the limit: it will always return a total of at most 5. Is there any solution in CakePHP?
This page in the CakePHP 3 book explains EXACTLY the answer to your question, including how and why it works:
Returning the Total Count of Records
Using a single query object, it is possible to obtain the total number
of rows found for a set of conditions:
$total = $articles->find()->where(['is_active' => true])->count();
The count() method will ignore the limit, offset and page clauses,
thus the following will return the same result:
$total = $articles->find()->where(['is_active' => true])->limit(10)->count();
This is useful when you need to know the total result set size in
advance, without having to construct another Query object. Likewise,
all result formatting and map-reduce routines are ignored when using
the count() method.
Notice the bit about "... will ignore the limit, offset, and page clauses"
So try something like this:
$data = $articles->find()->where(['is_active' => true])->limit(10);
$count = $data->count();
I don't think you are familiar with the use of limit in find, so I suggest you study the docs.
The limit in your query means that the query will only return the first 5 records, even if the query actually matches 50.
So, in order to get the actual total, you just need to drop the limit and count the results:
$total = $this->Orders->find()->where(['order_quantity' => 5])->count();
My data model has an entity Person with 3 related (1:N) entities Jobs, Tasks and Dates.
My query looks like
var persons = (from x in context.Persons
               select new {
                   PersonId = x.Id,
                   JobNames = x.Jobs.Select(y => y.Name),
                   TaskDates = x.Tasks.Select(y => y.Date),
                   DateInfos = x.Dates.Select(y => y.Info)
               }).ToList();
Everything seems to work fine, but the lists JobNames, TaskDates and DateInfos are not all filled.
For example, TaskDates and DateInfos have the correct values, but JobNames stays empty. But when I remove TaskDates from the query, then JobNames is correctly filled.
So it seems that EF can only handle a limited number of these "subqueries"? Is this correct? If so, what is the maximum number of these "subqueries" for a single statement? Is there a way to work around this issue without having to make more than one call to the database?
(ps: I'm not entirely sure, but I seem to remember that this query worked in LINQ2SQL - could it be?)
UPDATE
This is driving me crazy. I tried to reproduce the issue from the ground up in a fresh, simple project (to post the entire piece of code here, not only an oversimplified example), and I found I wasn't able to reproduce it. It still happens within our existing code base (apparently there's more behind this problem, but I cannot share this closed code base, unfortunately).
After hours and hours of playing around I found the weirdest behavior:
It works great when I don't SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; before calling the LINQ statement
It also works great (independent of the above) when I don't use a .Take() to only get the first X rows
It also works great when I add additional .Where() statements to cut the number of rows returned from SQL Server
I didn't find any comprehensible reason for this behavior, but I started to look at the SQL: although EF generates the exact same SQL, the execution plan is different when I use READ UNCOMMITTED. It returns more rows on a specific index in the middle of the execution plan, which curiously ends in fewer rows returned for the entire SQL statement, which in turn results in the missing data that was the reason for my question to begin with.
This sounds very confusing and unbelievable, I know, but this is the behavior I see. I don't know what else to do, I don't even know what to google for at this point ;-).
I can fix my problem (just don't use READ UNCOMMITTED), but I have no idea why it occurs and if it is a bug or something I don't know about SQL Server. Maybe there's some "magic max number of allowed results in sub-queries" in SQL Server? At least: As far as I can see, it's not an issue with EF itself.
A little late, but does calling ToList() on each subquery produce the required effect?
var persons = (from x in context.Persons
               select new {
                   PersonId = x.Id,
                   JobNames = x.Jobs.Select(y => y.Name).ToList(),
                   TaskDates = x.Tasks.Select(y => y.Date).ToList(),
                   DateInfos = x.Dates.Select(y => y.Info).ToList()
               }).ToList();
Using the PHP library for Salesforce, I am running:
SELECT ... FROM Account LIMIT 100
But the LIMIT is always capped at 25 records. I am selecting many fields (60 fields). Is this a hard limit?
The skeleton code:
$client = new SforceEnterpriseClient();
$client->createConnection("EnterpriseSandboxWSDL.xml");
$client->login(USERNAME, PASSWORD.SECURITY_TOKEN);
$query = "SELECT ... FROM Account LIMIT 100";
$response = $client->query($query);
foreach ($response->records as $record) {
// ... there's only 25 records
}
Here is my checklist:
1) Make sure you have more than 25 records.
2) After your first loop, call queryMore to check if there are more records.
3) Make sure batchSize is not set to 25 (see the sketch below).
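For point 3, a sketch of raising the batch size with the PHP Toolkit; setQueryOptions() and the QueryOptions class come from the toolkit, but treat the exact constructor signature as an assumption and verify it against your version:
<?php
// Sketch only: ask for bigger batches (Salesforce accepts roughly 200-2000).
$client->setQueryOptions(new QueryOptions(2000));
$response = $client->query("SELECT Id, Name FROM Account LIMIT 100");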
I don't use the PHP library for Salesforce, but I can assume that before running
SELECT ... FROM Account LIMIT 100
some more SELECT queries have been performed. If you didn't write them yourself, maybe the PHP library does it for you ;-)
The Salesforce SOAP API query method will only return a finite number of rows. There are a couple of reasons why it may be returning fewer than your defined limit.
The QueryOptions header batchSize has been set to 25. If this is the case, you could try adjusting it. If it hasn't been explicitly set, you could try setting it to a larger value.
When the SOQL statement selects a number of large fields (such as two or more custom fields of type long text), Salesforce may return fewer records than defined in the batchSize. The reduction in batch size also occurs when dealing with base64-encoded fields, such as the Attachment.Body. If this is the case, then you can just use queryMore with the QueryLocator from the first response.
In both cases, check the done and size properties of the QueryResult to determine whether you need to use queryMore, and the total number of rows that match the SOQL query.
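For example, a minimal sketch of the queryMore loop, assuming the $client connection from the question:
<?php
// Sketch only: keep fetching batches until Salesforce reports the set is done.
$response = $client->query("SELECT Id, Name FROM Account LIMIT 100");
$records  = $response->records;
while (!$response->done) {
    $response = $client->queryMore($response->queryLocator);
    $records  = array_merge($records, $response->records);
}
echo count($records); // all rows matching the SOQL query, up to the LIMIT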
To avoid governor limits, it might be better to add all the records to a list and then do everything you need to do to the records in that list. After you're done, just update your database using: update listName;
Does anybody have experience with getting random results from an index with 100,000,000+ (100 million) records?
The goal is getting 30 results, ordered randomly, at least 100 times per second.
My records are actually in MySQL, but selecting with ORDER BY RAND() from huge tables is the easiest way to kill MySQL.
Sphinx search, or whatever else: what do you recommend?
I don't have an index that big to try with.
barry@server:~/modules/sphinx-2.0.1-beta/api# time php test.php -i gi_stemmed --sortby @random --select id
Query '' retrieved 20 of 3067775 matches in 0.081 sec.
Query stats:
Matches:
<SNIP>
real 0m0.100s
user 0m0.010s
sys 0m0.010s
This is on a reasonably powerful dedicated server that is serving live queries (~20 qps).
But to be honest, if you don't need filtering (i.e. a 'WHERE' clause on each query), you can just set up a system that returns random results; you can do this with MySQL alone. Just using ORDER BY RAND() is evil (and Sphinx, while better at sorting than MySQL, is still doing basically the same thing).
How 'sparse' is your data? If most of your IDs are used, you can just do something like this:
$ids = array();
// getOne() here is a hypothetical helper that returns a single scalar value.
$max = getOne("SELECT MAX(id) FROM table");
foreach (range(1, 30) as $idx) {
    $ids[] = rand(1, $max); // pick 30 random ids between 1 and the max
}
$query = "SELECT * FROM table WHERE id IN (" . implode(',', $ids) . ")";
(You may want to use shuffle() in PHP on the results afterwards, as you're likely to get the results out of MySQL in id order.)
This will be much more efficient. If you do have holes, perhaps look up 33 rows instead. Sometimes you will get more than you need (just discard the extras), but you should still get 30 most of the time.
(Of course you could cache $max somewhere, so it doesn't have to be looked up all the time.)
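A tiny sketch of that post-processing, assuming $rows holds the rows fetched with the query above:
// MySQL returns the rows in id order, so shuffle in PHP,
// then trim to 30 in case the over-fetch returned extras.
shuffle($rows);
$rows = array_slice($rows, 0, 30);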
Otherwise you could set up a dedicated 'shuffled' list: basically a FIFO buffer. Have one thread filling it with random results (perhaps using the above system, 3000 ids at a time), and then the consumers just read random results directly off this queue.
A FIFO is not particularly easy to implement with MySQL, so maybe use a different system: maybe Redis, or even just memcache.
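A minimal sketch of that queue using the phpredis extension, assuming a PDO connection $db; the key name 'shuffled_ids' is made up for illustration:
<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
// Producer (run periodically): push a batch of random ids onto the queue.
$max = (int) $db->query("SELECT MAX(id) FROM table")->fetchColumn();
for ($i = 0; $i < 3000; $i++) {
    $redis->rPush('shuffled_ids', (string) rand(1, $max));
}
// Consumer: pop 30 ids per request; lPop() returns false once the queue is empty.
$ids = array();
for ($i = 0; $i < 30 && ($id = $redis->lPop('shuffled_ids')) !== false; $i++) {
    $ids[] = (int) $id;
}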