Liquibase - generating a changelog for existing Sybase db

I'm looking to set up Liquibase for a project with an existing Sybase db. I've tried running the following command:
lb --driver=net.sourceforge.jtds.jdbc.Driver \
--classpath=C:\<home>\.m2\repository\net\sourceforge\jtds\jtds\1.2.8\jtds-1.2.8.jar \
--changeLogFile=testInitialChangelog.xml \
--url="jdbc:jtds:sybase://<host>:<port>/<dbname>" \
--username="<dbuser>" \
--password="<dbpwd>" \
--defaultSchemaName=<dbname> \
--logLevel=debug \
generateChangeLog
It hasn't worked - well, it produced an empty changelog. The output was:
Picked up JAVA_TOOL_OPTIONS: -Duser.home=C:\<home>
DEBUG 29/06/17 17:07: liquibase: Connected to <dbuser>#jdbc:jtds:sybase://<host>:<port>/<dbname>
DEBUG 29/06/17 17:07: liquibase: Not adjusting the auto commit mode; it is already true
DEBUG 29/06/17 17:07: liquibase: Computed checksum for 1498752444338 as a8b042e5c46068977523e7071dff7a0f
WARNING 29/06/17 17:07: liquibase: Did not find schema '<dbname>' to snapshot
INFO 29/06/17 17:07: liquibase: Can not use class liquibase.serializer.core.yaml.YamlChangeLogSerializer as a Liquibase service because org.yaml.snakeyaml.representer.Representer is not in the classpath
INFO 29/06/17 17:07: liquibase: Can not use class liquibase.serializer.core.json.JsonChangeLogSerializer as a Liquibase service because org.yaml.snakeyaml.representer.Representer is not in the classpath
INFO 29/06/17 17:07: liquibase: testInitialChangelog.xml does not exist, creating
DEBUG 29/06/17 17:07: liquibase: MissingObjectChangeGenerator type order: liquibase.structure.core.Catalog liquibase.structure.core.Schema liquibase.structure.core.Sequence liquibase.structure.core.StoredProcedure liquibase.
structure.core.Table liquibase.structure.core.Column liquibase.structure.core.PrimaryKey liquibase.structure.core.UniqueConstraint liquibase.structure.core.Index liquibase.structure.core.ForeignKey liquibase.structure.core
.View
DEBUG 29/06/17 17:07: liquibase: UnexpectedObjectChangeGenerator type order: liquibase.structure.core.Catalog liquibase.structure.core.ForeignKey liquibase.structure.core.Schema liquibase.structure.core.StoredProcedure liqui
base.structure.core.UniqueConstraint liquibase.structure.core.View liquibase.structure.core.Table liquibase.structure.core.PrimaryKey liquibase.structure.core.Column liquibase.structure.core.Index liquibase.structure.core.
Sequence
DEBUG 29/06/17 17:07: liquibase: ChangedObjectChangeGenerator type order: liquibase.structure.core.Catalog liquibase.structure.core.ForeignKey liquibase.structure.core.Schema liquibase.structure.core.Sequence liquibase.struc
ture.core.StoredProcedure liquibase.structure.core.Table liquibase.structure.core.Column liquibase.structure.core.PrimaryKey liquibase.structure.core.UniqueConstraint liquibase.structure.core.Index liquibase.structure.core
.View
Liquibase 'generateChangeLog' Successful

In this case the fix was quite simple - the --defaultSchemaName parameter. Setting it to --defaultSchemaName=dbo made it work.
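For reference, a sketch of the corrected invocation (identical to the command above with only the schema changed; host, port, credentials and the jTDS jar path are the same placeholders):
lb --driver=net.sourceforge.jtds.jdbc.Driver \
--classpath=C:\<home>\.m2\repository\net\sourceforge\jtds\jtds\1.2.8\jtds-1.2.8.jar \
--changeLogFile=testInitialChangelog.xml \
--url="jdbc:jtds:sybase://<host>:<port>/<dbname>" \
--username="<dbuser>" \
--password="<dbpwd>" \
--defaultSchemaName=dbo \
generateChangeLog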

Related

Sonar-runner is unable to generate the issue-report folder

I am using sonar-runner 2.4 on SonarQube server 4.5.6 (Windows 7, 32-bit).
I am running the analysis on sample C++ code.
The strange thing is that the console output still shows the analysis completed successfully.
However, no issue-report folder gets generated in the .sonar folder.
Below is a snippet of the console output after running sonar-runner:
14:35:58.160 INFO - Store results in database
14:35:58.160 DEBUG - Execute org.sonar.batch.index.MeasurePersister
14:35:58.255 DEBUG - Execute org.sonar.batch.index.DuplicationPersister
14:35:58.260 DEBUG - Execute org.sonar.batch.index.ComponentDataPersister
14:35:58.280 DEBUG - Execute org.sonar.batch.issue.IssuePersister
14:35:58.285 DEBUG - Execute org.sonar.batch.phases.GraphPersister
14:35:58.350 INFO - ANALYSIS SUCCESSFUL, you can browse http://x.x.x.x:9000/dashboard/index/test_dummy
14:35:58.350 DEBUG - Evict preview database
14:35:58.350 DEBUG - Download: http://x.x.x.x:9000/batch_bootstrap/evict?project=1725 (no proxy)
14:35:58.425 DEBUG - Post-jobs : org.sonar.issuesreport.ReportJob#d718c1 -> org.sonar.plugins.core.issue.notification.SendIssueNotificationsPostJob#19a2e83 -> org.sonar.plugins.core.batch.IndexProjectPostJob#163c13b -> org.sonar.plu
14:35:58.425 INFO - Executing post-job class org.sonar.issuesreport.ReportJob
14:35:58.425 INFO - Executing post-job class org.sonar.plugins.core.issue.notification.SendIssueNotificationsPostJob
14:35:58.425 INFO - Executing post-job class org.sonar.plugins.core.batch.IndexProjectPostJob
14:35:58.450 INFO - Executing post-job class org.sonar.plugins.dbcleaner.ProjectPurgePostJob
14:35:58.460 INFO - -> Keep one snapshot per day between 2016-02-05 and 2016-03-03
14:35:58.460 INFO - -> Keep one snapshot per week between 2015-03-06 and 2016-02-05
14:35:58.460 INFO - -> Keep one snapshot per month between 2011-03-11 and 2015-03-06
14:35:58.465 INFO - -> Delete data prior to: 2011-03-11
14:35:58.470 DEBUG - ==> Preparing: select * from projects p where p.id=?
14:35:58.480 DEBUG - ==> Parameters: 1725(Long)
14:35:58.485 DEBUG - <== Total: 1
14:35:58.485 DEBUG - ==> Preparing: select * from projects where scope='PRJ' and root_id=?
14:35:58.485 DEBUG - ==> Parameters: 1725(Long)
14:35:58.490 DEBUG - <== Total: 0
14:35:58.490 INFO - -> Clean test_dummy [id=1725]
14:35:58.490 INFO - <- Clean snapshot 16948
14:35:58.640 DEBUG - Release semaphore on project : org.sonar.api.resources.Project#1e187ff[id=1725,key=test_dummy,qualifier=TRK], with key batch-test_dummy
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
Total time: 5.940s
Final Memory: 11M/109M
INFO: ------------------------------------------------------------------------
sonar-project.properties file:
sonar.projectKey=test_dummy
sonar.projectName=test_dummy
sonar.projectVersion=1.0
sonar.sources=.
sonar.sourceEncoding=UTF-8
sonar.language=c++
Is there anything that I am still missing in terms of configuration or anything in general?
Thanks in advance...
As per the Issues Report Plugin documentation:
To get an HTML report, set the sonar.issuesReport.html.enable property to true.
You can also enable this by default in the SonarQube General Settings ('Issues Report' section).
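For example, a minimal sketch, assuming the Issues Report plugin is installed on the server: either add the flag to the sonar-project.properties shown above, or pass it per invocation on the command line.
# in sonar-project.properties
sonar.issuesReport.html.enable=true
# or per run
sonar-runner -Dsonar.issuesReport.html.enable=true
After re-running the analysis, the issue-report output described in the question should appear under the project's .sonar folder.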

Integration between Nutch 1.11(1.x) and Solr 5.3.1(5.x)

I just started using Nutch 1.11 and Solr 5.3.1.
I want to crawl data with Nutch, then index and prepare for searching with Solr.
I know how to crawl data from the web using Nutch's bin/crawl command, and I have successfully crawled a lot of data from a website locally.
I also started a new Solr server locally with the command below, run from the Solr root folder:
bin/solr start
Then I created the example files core using the configuration under the example folder:
bin/solr create -c files -d example/files/conf
I can log in to the admin URL below and manage the files core:
http://localhost:8983/solr/#/files
So I believe I started Solr correctly, and I began posting the Nutch data into Solr with Nutch's bin/nutch index command:
bin/nutch index crawl/crawldb \
-linkdb crawl/linkdb \
-params solr.server.url=127.0.0.1:8983/solr/files \
-dir crawl/segments
I was hoping that with Solr 5's new Auto Schema feature I could take it easy; however, I got the error below (copied from the log file):
WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO segment.SegmentChecker - Segment dir is complete: file:/user/nutch/apache-nutch-1.11/crawl/segments/s1.
INFO segment.SegmentChecker - Segment dir is complete: file:/user/nutch/apache-nutch-1.11/crawl/segments/s2.
INFO segment.SegmentChecker - Segment dir is complete: file:/user/nutch/apache-nutch-1.11/crawl/segments/s3.
INFO indexer.IndexingJob - Indexer: starting at 2015-12-14 15:21:39
INFO indexer.IndexingJob - Indexer: deleting gone documents: false
INFO indexer.IndexingJob - Indexer: URL filtering: false
INFO indexer.IndexingJob - Indexer: URL normalizing: false
INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
INFO indexer.IndexingJob - Active IndexWriters :
SolrIndexWriter
solr.server.type : Type of SolrServer to communicate with (default 'http' however options include 'cloud', 'lb' and 'concurrent')
solr.server.url : URL of the Solr instance (mandatory)
solr.zookeeper.url : URL of the Zookeeper URL (mandatory if 'cloud' value for solr.server.type)
solr.loadbalance.urls : Comma-separated string of Solr server strings to be used (madatory if 'lb' value for solr.server.type)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.commit.size : buffer size when sending to Solr (default 1000)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
INFO indexer.IndexerMapReduce - IndexerMapReduce: crawldb: crawl/crawldb
INFO indexer.IndexerMapReduce - IndexerMapReduce: linkdb: crawl/linkdb
INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: file:/user/nutch/apache-nutch-1.11/crawl/segments/s1
INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: file:/user/nutch/apache-nutch-1.11/crawl/segments/s2
INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: file:/user/nutch/apache-nutch-1.11/crawl/segments/s3
WARN conf.Configuration - file:/tmp/hadoop-user/mapred/staging/user117437667/.staging/job_local117437667_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
WARN conf.Configuration - file:/tmp/hadoop-user/mapred/staging/user117437667/.staging/job_local117437667_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
WARN conf.Configuration - file:/tmp/hadoop-user/mapred/local/localRunner/user/job_local117437667_0001/job_local117437667_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
WARN conf.Configuration - file:/tmp/hadoop-user/mapred/local/localRunner/user/job_local117437667_0001/job_local117437667_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
INFO solr.SolrMappingReader - source: content dest: content
INFO solr.SolrMappingReader - source: title dest: title
INFO solr.SolrMappingReader - source: host dest: host
INFO solr.SolrMappingReader - source: segment dest: segment
INFO solr.SolrMappingReader - source: boost dest: boost
INFO solr.SolrMappingReader - source: digest dest: digest
INFO solr.SolrMappingReader - source: tstamp dest: tstamp
INFO solr.SolrIndexWriter - Indexing 250 documents
INFO solr.SolrIndexWriter - Deleting 0 documents
INFO solr.SolrIndexWriter - Indexing 250 documents
WARN mapred.LocalJobRunner - job_local117437667_0001
java.lang.Exception: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/update. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/update. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:512)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:134)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:85)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:493)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:422)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:356)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:56)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:145)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:222)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:231)
I remember that this error:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html.
is usually related to the Solr URL, but I double-checked the URL I used, 127.0.0.1:8983/solr/files, and I think it is correct.
Does anyone know what the problem is? I searched on the web and here, but found nothing useful.
Note: I also tried disabling Solr 5's Auto Schema feature in example/files/conf/solrconfig.xml and replacing example/files/conf/managed-schema.xml with Nutch's conf/schema.xml, but I still hit the same error.
Update: After trying the deprecated command bin/nutch solrindex (thanks to Thangaperumal), the previous error is gone, but I hit another error:
bin/nutch solrindex http://127.0.0.1:8983/solr/files crawl/crawldb -linkdb crawl/linkdb crawl/segments/s1
Error message:
INFO solr.SolrIndexWriter - Indexing 250 documents
INFO solr.SolrIndexWriter - Deleting 0 documents
INFO solr.SolrIndexWriter - Indexing 250 documents
INFO solr.SolrIndexWriter - Deleting 0 documents
INFO solr.SolrIndexWriter - Indexing 250 documents
WARN mapred.LocalJobRunner - job_local1306504137_0001
java.lang.Exception: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Unable to invoke function processAdd in script: update-script.js: Can't unambiguously select between fixed arity signatures [(java.lang.String, java.io.Reader), (java.lang.String, java.lang.String)] of the method org.apache.solr.analysis.TokenizerChain.tokenStream for argument types [java.lang.String, null]
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Unable to invoke function processAdd in script: update-script.js: Can't unambiguously select between fixed arity signatures [(java.lang.String, java.io.Reader), (java.lang.String, java.lang.String)] of the method org.apache.solr.analysis.TokenizerChain.tokenStream for argument types [java.lang.String, null]
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:134)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:85)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:493)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:422)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:356)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:56)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:145)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:222)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:231)
Instead, try this statement to integrate Solr and Nutch:
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/
Have you tried specifying the Solr URL using:
-D solr.server.url=http://localhost:8983/solr/files
instead of the -params approach? At least this is the right syntax for the crawl script, and since both invoke the same underlying Java class to do the work, it should work:
bin/nutch index crawl/crawldb \
-linkdb crawl/linkdb \
-D solr.server.url=http://127.0.0.1:8983/solr/files \
-dir crawl/segments
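As a quick sanity check (a hedged aside, assuming the files core created earlier in the question), you can confirm that the core's update handler responds before re-running the indexer:
curl "http://127.0.0.1:8983/solr/files/update?commit=true"
If this returns 404, the core name or URL is wrong; the 404 on /solr/update in the original stack trace shows the indexer was not being pointed at the core at all.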

Getting NoMethodError when using jdbc sqlserver adapter with jRuby on Rails

I have a jRuby on Rails application hosted on IIS 8 on Windows Server 2012 R2 (I know, I must be insane). After about 2 weeks of painful programming I have a working application but I'm trying to configure it to use SQL Server 2012 Express via the Microsoft JDBC SQL Server driver instead of the default sqlite3 database. So far, this is not going well.
After configuring my database.yml file I tried to run bundle exec rake db:migrate RAILS_ENV=production --trace and was met with the error rake aborted! NoMethodError: undefined method 'type' for "nvarchar(255)":String. A full trace of the error can be seen below:
rake db:migrate
C:\inetpub\wwwroot\rails>bundle exec rake db:migrate RAILS_ENV=production --trace
io/console not supported; tty will not be manipulated
io/console not supported; tty will not be manipulated
NOTE: ActiveRecord 4.2 is not (yet) fully supported by AR-JDBC, please help us finish 4.2 support - check http://bit.ly/jruby-42 for starters
** Invoke db:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke db:load_config (first_time)
** Execute db:load_config
** Execute db:migrate
rake aborted!
NoMethodError: undefined method `type' for "nvarchar(255)":String
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_methods/time_zone_conversion.rb:64:in `create_time_zone_conversion_attribute?'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_methods/time_zone_conversion.rb:53:in `inherited'
org/jruby/RubyProc.java:271:in `call'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:61:in `matching'
org/jruby/RubyArray.java:2470:in `select'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:60:in `matching'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:56:in `decorators_for'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:47:in `apply'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:29:in `add_user_provided_columns'
org/jruby/RubyArray.java:2414:in `map'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attribute_decorators.rb:28:in `add_user_provided_columns'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attributes.rb:93:in `columns'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/attributes.rb:98:in `columns_hash'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/querying.rb:41:in `find_by_sql'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/relation.rb:638:in `exec_queries'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/relation.rb:514:in `load'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/relation.rb:243:in `to_a'
C:0:in `map'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:844:in `get_all_versions'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:985:in `migrated'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:990:in `ran?'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:967:in `runnable'
org/jruby/RubyArray.java:2640:in `reject!'
org/jruby/RubyArray.java:2611:in `reject'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:967:in `runnable'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:952:in `migrate'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:820:in `up'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/migration.rb:798:in `migrate'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/tasks/database_tasks.rb:137:in `migrate'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/activerecord-4.2.2/lib/active_record/railties/databases.rake:44:in `(root)'
org/jruby/RubyProc.java:271:in `call'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/task.rb:240:in `execute'
org/jruby/RubyArray.java:1613:in `each'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/task.rb:179:in `invoke_with_call_chain'
C:/jruby-1.7.22/lib/ruby/1.9/monitor.rb:211:in `mon_synchronize'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/task.rb:172:in `invoke_with_call_chain'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:150:in `invoke_task'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
org/jruby/RubyArray.java:1613:in `each'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:115:in `run_with_threads'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:100:in `top_level'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:78:in `run'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:176:in `standard_exception_handling'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
C:/jruby-1.7.22/lib/ruby/gems/shared/gems/rake-10.4.2/bin/rake:33:in `(root)'
org/jruby/RubyKernel.java:1059:in `load'
C:\jruby-1.7.22\bin\rake:23:in `(root)'
Tasks: TOP => db:migrate
database.yml
# SQLite version 3.x
#   gem install sqlite3
#
#   Ensure the SQLite 3 gem is defined in your Gemfile
#   gem 'sqlite3'
#
default: &default
  adapter: sqlite3
  pool: 5
  timeout: 5000

development:
  <<: *default
  database: db/development.sqlite3

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: db/test.sqlite3

production:
  adapter: sqlserver
  database: smdb
  url: jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;databaseName=smdb;user=sa;password=password123;
Gemfile
source 'https://rubygems.org'

ruby '1.9.3', :engine => 'jruby', :engine_version => '1.7.22'

gem 'rails', '4.2.2'
gem 'bootstrap-sass', '3.2.0.0'
gem 'sass-rails', '5.0.2'
gem 'uglifier', '2.5.3'
gem 'coffee-rails', '4.1.0'
gem 'coffee-script-source', '1.8.0'
gem 'jquery-rails', '4.0.3'
gem 'turbolinks', '2.3.0'
gem 'jbuilder', '2.2.3'
gem 'faker', '1.4.2'
gem 'tzinfo-data', '1.2015.6'
gem 'sdoc', '0.4.0', group: :doc

group :development, :test do
  gem 'activerecord-jdbcsqlite3-adapter', '1.3.17'
  gem 'spring', '1.1.3'
end

group :test do
  gem 'minitest-reporters', '1.0.5'
  gem 'mini_backtrace', '0.1.3'
  gem 'guard-minitest', '2.3.1'
end

group :production do
  gem 'trinidad', '1.4.6'
  gem 'deprecated', '3.0.1'
  gem 'activerecord-jdbcmssql-adapter', '1.3.17'
end
And finally my two Database Migration files:
20150805101650_create_servers.rb
class CreateServers < ActiveRecord::Migration
  def change
    create_table :servers do |t|
      t.string :server_name
      t.string :application
      t.string :server_role
      t.string :team_contact
      t.string :individual_contact
      t.string :business_owner
      t.string :vendor
      t.string :vendor_contact
      t.string :main_doc
      t.string :main_win
      t.timestamps null: false
    end
  end
end
20150805105953_add_index_to_server_name.rb
class AddIndexToServerName < ActiveRecord::Migration
  def change
    add_index :servers, :server_name, unique: true
  end
end
I know that the instance is being hit, as I was previously met with an authentication error, which went away once I added the username and password to the JDBC connection string.
Solution
I downgraded to Rails 4.1.0, which seems to be more compatible with the JDBC adapter. I then ran rake db:migrate RAILS_ENV=production, which resulted in another error. That error was resolved by removing config.active_record.raise_in_transactional_callbacks = true from application.rb.
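A minimal sketch of that change against the Gemfile shown above and the application.rb mentioned in the solution (other gem versions may also need loosening for Bundler to resolve; treat this as an outline, not a tested configuration):
# Gemfile
gem 'rails', '4.1.0'   # was '4.2.2'
# config/application.rb - remove this line after downgrading
# config.active_record.raise_in_transactional_callbacks = true
Then run bundle update rails and retry bundle exec rake db:migrate RAILS_ENV=production.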
AR-JDBC does not fully handle Rails 4.2 yet, especially with databases such as MS SQL; hence the warning:
NOTE: ActiveRecord 4.2 is not (yet) fully supported by AR-JDBC, please help us finish 4.2 support - check http://bit.ly/jruby-42 for starters
... SQLServer support needs your help/support!

Etherpad - PostgreSQL error: language "plpgsql" does not exist

I installed Etherpad Lite and tried to use it with a PostgreSQL database, but got this error:
events.js:72
throw er; // Unhandled 'error' event
^
error: language "plpgsql" does not exist
at Connection.parseE (/opt/openerp/etherpad/etherpad-lite/src/node_modules/$
at Connection.parseMessage (/opt/openerp/etherpad/etherpad-lite/src/node_mo$
at Socket.<anonymous> (/opt/openerp/etherpad/etherpad-lite/src/node_modules$
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
at emitReadable (_stream_readable.js:404:5)
at readableAddChunk (_stream_readable.js:165:9)
at Socket.Readable.push (_stream_readable.js:127:10)
RESTART!
On other servers I didn't have this problem using PostgreSQL with Etherpad.
I created the database using this command:
create database etherpad WITH TEMPLATE template0;
My configuration in Etherpad looks like this:
"dbType" : "postgres",
"dbSettings" : {
"user" : "db_user",
"host" : "localhost",
"password": "my_password",
"database": "etherpad"
},
Everything else is left unchanged, except that I commented out the dirty DB settings.
P.S. With dirty DB it works.
If you are using PostgreSQL 9.1 or below, you should CREATE LANGUAGE plpgsql in template1 and then create your database based on that template. This should not happen, or be required, on PostgreSQL 9.2 and above.
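A minimal sketch of that fix, assuming a superuser psql session and the database name from the question (note that DROP DATABASE discards any existing pads):
-- connect to template1 and install the language (PostgreSQL 9.1 and below)
\c template1
CREATE LANGUAGE plpgsql;
-- recreate the Etherpad database from template1 instead of template0
DROP DATABASE etherpad;
CREATE DATABASE etherpad WITH TEMPLATE template1;
Alternatively, run CREATE LANGUAGE plpgsql; while connected to the existing etherpad database to avoid recreating it.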

Fileconveyor not syncing files, even though I get no errors. What am I doing wrong?

I've followed all of the directions for installing fileconveyor on my server and, much to my surprise, I think I installed everything correctly. I found this article particularly useful.
When I run arbitrator.py I get the following output:
2013-02-22 18:21:13,792 - Arbitrator - WARNING - File Conveyor is initializing.
2013-02-22 18:21:13,795 - Arbitrator - WARNING - Loaded config file.
/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py:110: DeprecationWarning: The SECRET_KEY setting must not be empty.
warnings.warn("The SECRET_KEY setting must not be empty.", DeprecationWarning)
2013-02-22 12:21:13,890 - Arbitrator - WARNING - Created 'cloudfiles' transporter for the 'Rackspace Cloud Files' server.
2013-02-22 12:21:13,890 - Arbitrator - WARNING - Server connection tests succesful!
2013-02-22 12:21:13,891 - Arbitrator - WARNING - Setup: created transporter pool for the 'Rackspace Cloud Files' server.
2013-02-22 12:21:13,893 - Arbitrator - WARNING - Setup: initialized 'pipeline' persistent queue, contains 0 items.
2013-02-22 12:21:13,893 - Arbitrator - WARNING - Setup: initialized 'files_in_pipeline' persistent list, contains 0 items.
2013-02-22 12:21:13,894 - Arbitrator - WARNING - Setup: initialized 'failed_files' persistent list, contains 0 items.
2013-02-22 12:21:13,895 - Arbitrator - WARNING - Setup: initialized 'files_to_delete' persistent list, contains 0 items.
2013-02-22 12:21:13,895 - Arbitrator - WARNING - Setup: moved 0 items from the 'files_in_pipeline' persistent list into the 'pipeline' persistent queue.
2013-02-22 12:21:13,896 - Arbitrator - WARNING - Setup: connected to the synced files DB. Contains metadata for 0 previously synced files.
2013-02-22 12:21:13,974 - Arbitrator - WARNING - Setup: initialized FSMonitor.
2013-02-22 12:21:13,976 - Arbitrator - WARNING - Fully up and running now.
I've double-checked config.xml to make sure it matches the directory where my files are. It seems to be running; it just won't sync the files. Any idea what I'm doing wrong?
You need to fix the secret key warning first. You can use this link to generate a key and paste it into [installation_dir]/django/conf/global_settings.py: http://www.miniwebtool.com/django-secret-key-generator/
My installation was in /usr/local/lib/python2.7/dist-packages/django.
Hth,
Sadashiv.
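If you'd rather generate the key locally instead of using a website, a minimal sketch in plain Python (no Django required; any sufficiently long random string works as a SECRET_KEY):
import random
import string

# 50 random characters drawn from letters, digits and a few punctuation marks
chars = string.ascii_letters + string.digits + '!@#$%^&*(-_=+)'
print(''.join(random.SystemRandom().choice(chars) for _ in range(50)))
Paste the printed value into the SECRET_KEY setting in the file mentioned above.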
