Environment: Solr 1.4 on Windows/MS SQL Server
A write lock is created whenever I try to do a full-import of documents using the DataImportHandler (DIH). The logs say "Creating a connection with the database....." and the process goes no further (no database connection is obtained), so the index is never built. Note that no other process is accessing the index, and I have even restarted my MS SQL Server service. However, I still see a write.lock file in my index directory.
What could be the reason for this? Even though I have set the unlockOnStartup flag to true in solrconfig, indexing still does not happen.
The problem was resolved. There was an issue between a Java update and the Microsoft JDBC driver.
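Since the import stalled at the connection step, a quick way to isolate a driver/JVM problem like this is to open a plain JDBC connection to the same database outside Solr. Below is a minimal sketch; the URL, database name, and credentials are placeholders, and it assumes the Microsoft JDBC driver jar is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlServerConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Older driver versions may also need:
        // Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        String url = "jdbc:sqlserver://localhost:1433;databaseName=MyDb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "solr_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connection OK, SELECT 1 returned " + rs.getInt(1));
        }
    }
}

If this standalone check hangs in the same way, the problem lies in the driver/JVM combination rather than in the DIH configuration.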
Production server: Solr 5.4.1, Ruby on Rails, Ubuntu Server.
Solr suddenly stopped. When I restarted it, select/get requests worked, but as soon as any update/reindex job ran, Solr stopped again. I also cannot find any error statement in the log.
I compared the Solr logs of the working and the stopped system and found that, after DirectUpdateHandler2 runs end_commit_flush, the following entry does not appear in the non-working system's log:
97588877 INFO (searcherExecutor-7-thread-1-processing-x:namecol) [x:namecol] o.a.s.c.SolrCore [namecol] Registered new searcher Searcher#1bf35cb6[namecol main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_3rc22(5.4.1):C68771/19227:delGen=227) Uninverting(_4ee4k(5.4.1):C43777/12974) Uninverting(_4fogn(5.4.1):C13374/2400) Uninverting(_4fopo(5.4.1):c1712/83) Uninverting(_4fomr(5.4.1):c1150/216) Uninverting(_4foqs(5.4.1):c995/64) Uninverting(_4for4(5.4.1):c156) Uninverting(_4for8(5.4.1):c94) Uninverting(_4for9(5.4.1):c3)))}
Which part do I need to check? I have set softCommit to -1, so Solr no longer stops after frontend changes, but select queries also do not return the updated data until I restart it again.
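Queries only see index changes once a new searcher is opened, so with automatic soft commits disabled one option is to issue an explicit commit from the client after indexing. A minimal SolrJ sketch (SolrJ 6+ Builder API), assuming the core name namecol from the log above and the default local URL:

import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ExplicitCommit {
    public static void main(String[] args) throws Exception {
        // Core name and base URL are assumptions taken from the log above.
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/namecol").build()) {
            // waitFlush=false, waitSearcher=true, softCommit=true:
            // opens a new searcher so queries see the latest updates.
            client.commit(false, true, true);
        }
    }
}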
As a workaround, I created a new core and re-indexed all the data.
I also updated Solr to version 8.8.2, which is a more stable release.
Solr version 8.5.1
My Solr is not starting anymore. I use the solr start command to start it. Every time I run this command I see the following error:
Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
Waiting up to 30 to see Solr running on port 8983
ERROR: Solr at http://localhost:8983/solr did not come online within 30 seconds!
There is no error in the log files, but connecting to Solr fails. This was working earlier.
Could someone please help me troubleshoot the issue?
I found out what the issue was. Even though the message indicated that the server did not start within 30 seconds, it did start after some time.
I had closed the console window because I thought the server was running in the background, and that killed the server. The server stays up only as long as I keep open the command window that I used to start it.
I am working on an iOS project that has a Sybase UltraLite database synchronized with a Sybase SQL Anywhere 12 database using MobiLink.
Everything was working properly until today, when I decided to add some fields so that they synchronize to the main database.
I updated the schema of the consolidated database from the main engine, then updated the schema of the remote database from the consolidated engine, mapped the added fields together, and deployed a new UltraLite database.
Please note that this is not the first time I have done a similar task; I regularly add fields and sync the databases.
After the update, when I synchronize using the blank UltraLite database, MobiLink fails, giving only this error: Synchronization Failed: -1305 (MOBILINK_COMMUNICATIONS_ERROR) %1:201 %2: %3:0
I researched error number 201 in Sybase, and it points to SQLE_NOT_PUBLIC_ID.
In the Sybase documentation, the error's probable cause is:
"The option specified in the SET OPTION statement is PUBLIC only. You cannot define this option for any other user."
I have tried redeploying, and I have tried moving the engine to a Windows PC; everything gives the same error. I have no clue where this SET OPTION statement comes from or how I can solve it.
Any hints are appreciated!
The problem was simply caused by a small network timeout value in the MobiLink stream parameters:
info.stream_parms = (char *)"host=192.168.0.100;port=3309;timeout=1";
I just changed the value from timeout=1 to timeout=300 and it worked!
I am trying to set up merge replication using web synchronization between a publishing SQL Server 2012 Standard instance and a subscribing SQL Server 2012 Express instance. After following the instructions provided on TechNet, I am stuck on this:
Source: Merge Process(Web Sync Server)
Number: -2147200985
Message: The subscription to publication 'MyMergePublication' has expired or does not exist.
I have already verified that the SSL certificates are good and that I can browse to the publishing machine's URL https://mycomputer/replisapi.dll and get the expected output. I have also verified that the snapshot was set up, and I took a giant hammer and used an administrator account to run the pool identity, which is really bad security-wise, but I wanted to rule out security as the cause.
To add to the mystery, when I try and fail to sync, the publisher acknowledges that a new subscriber has been registered, but the subscriber cannot get the snapshot at all, so the subscriber database is still empty.
In Replication Monitor there is no failed-synchronization history and there are no errors; all it says is that the subscriber is uninitialized, nothing more.
Turning up the verbosity of the merge agent, I saw some SQL being executed. I tried replaying that SQL myself and found that the following call fails with the same error:
{call sys.sp_MSgetreplicainfo(?,?,?,?,?,?,?,90)}
I called it with only the three mandatory parameters supplied, and it still failed. That is despite the fact that the earlier call to sp_helpmergepublication does return a row for that publication. Oddly, the output of sp_helpmergepublication does not match what I configured for the subscription (e.g. it says the web URL is null, while viewing the properties correctly shows the web URL being set). I am not sure whether that is significant.
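For what it's worth, a small JDBC sketch that dumps everything sp_helpmergepublication returns for the publication can make it easier to compare the stored settings against the properties dialog; the connection URL and credentials below are placeholders, not taken from the actual setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class InspectMergePublication {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; point this at the publisher database.
        String url = "jdbc:sqlserver://mycomputer:1433;databaseName=MyPublishedDb";
        try (Connection conn = DriverManager.getConnection(url, "repl_admin", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "EXEC sp_helpmergepublication @publication = N'MyMergePublication'")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // Dump every column so the stored publication settings can be
                // compared against what the properties dialog shows.
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.println(md.getColumnName(i) + " = " + rs.getObject(i));
                }
            }
        }
    }
}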
The body of sp_MSgetreplicainfo contains a call to another system stored procedure that I cannot run for some reason (it says it is not found), so I am not sure what is actually going on here.
Any clues would be greatly appreciated.
I am developing a servlet application. It obtains a database connection from the connection pool provided by the Tomcat container to query and update data.
I have run into a problem. The servlet gets a database connection and then adds a new table row or deletes one. After that, it commits the change. Later, another connection is obtained to execute queries. I find that the data returned by the queries on the second connection does not reflect the change made with the first connection.
Isn't that strange? The changes made with the first connection were committed successfully. Why do the newly inserted rows not appear in the later query? Why do the deleted rows still show up in it?
Is it related to the transaction isolation level setting?
Can anyone help?
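One thing worth ruling out is the transaction isolation level of the pooled connections. With MySQL's default REPEATABLE READ, a connection that has already started a transaction keeps reading the snapshot taken at the start of that transaction and will not see rows committed by another connection until its own transaction ends. A minimal sketch of lowering the isolation level on a connection taken from the pool; the JNDI name matches the Resource definition shown further below:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class IsolationLevelCheck {
    public static void runQueries() throws Exception {
        // JNDI name taken from the Resource definition in the question.
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/storewscloud");
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false);
            // READ COMMITTED lets each statement see the latest committed data
            // instead of the snapshot taken when the transaction started.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // ... run the SELECTs here ...
            conn.commit();
        }
    }
}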
03-12: More Information (#1):
I use MySQL Community Server 5.6.
My servlet runs on Tomcat 7.0.41.0.
The Resource element in the conf/server.xml is as follows:
<Resource type="javax.sql.DataSource"
name="jdbc/storewscloud"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/myappdb"
maxActive="100"
minIdle="10"
maxWait="10000"
initialSize="10"
removeAbandonedTimeout="60"
removeAbandoned="true"
logAbandoned="true"
username="root"
password="xxxxxxxxxx"
/>
I do not use any cache explicitly.
Every time the servlet gets a database connection, it turns the auto-commit mode of the connection off.
When the servlet is invoked, a database connection is obtained. The servlet uses it to update data in the database and then commits the changes. Next, it uses Apache HttpClient to invoke the same servlet to do something else; that invocation also obtains a database connection and executes a query. The later query returns 'old' data. If I refresh the web page, the latest data is shown. It looks as if some party, the MySQL JDBC driver or the connection object, is caching the data somewhere. I have no clue.
03-12: More Information (#2):
I ran an experiment that obtained a connection without using the connection pool. The result was correct, so the problem is caused by the connection pool.
To make the query return the right data on the second connection from the pool, I need to not only commit the data changes on the first pooled connection but also CLOSE that first connection.
It seems that the data changes are not completely saved in the database, even though commit() is called, until close() is called.
Why?
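One plausible explanation (an assumption, not something verified in the pool's source) is that the second pooled connection is still inside an old REPEATABLE READ transaction, so it keeps reading its old snapshot until the first connection is closed and the transactions are reset. Either way, scoping each unit of work to one connection that is committed and closed (returned to the pool) immediately avoids the problem, roughly like this (the items table is just an illustrative name):

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class UnitOfWork {
    // Insert a row in its own short-lived transaction, then return the
    // connection to the pool right away so no open transaction lingers.
    public static void insertRow(DataSource ds, String name) throws Exception {
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO items (name) VALUES (?)")) {
                ps.setString(1, name);
                ps.executeUpdate();
            }
            conn.commit();
        } // close() here returns the connection to the pool
    }
}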
I found that a new version of the C3P0 connection pool was released recently. I gave it a try, and it works! The problems I had no longer occur. Therefore, I used it to replace the bundled Tomcat connection pool. For those who encounter the same problem as I did, C3P0 may be a solution for you too.
C3P0 Project URL
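For completeness, here is roughly how C3P0 can be wired up programmatically instead of through a Tomcat Resource element; the driver class, URL and credentials mirror the Resource shown earlier, and the pool sizes are just examples:

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;

public class C3p0Setup {
    public static ComboPooledDataSource createPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");            // same driver as the Tomcat Resource
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/myappdb");   // same URL as the Tomcat Resource
        ds.setUser("root");
        ds.setPassword("xxxxxxxxxx");
        ds.setMinPoolSize(10);                                  // example sizes
        ds.setMaxPoolSize(100);
        return ds;
    }

    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = createPool();
        try (Connection conn = ds.getConnection()) {
            System.out.println("Got pooled connection: " + conn);
        }
        ds.close();
    }
}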