Saving features to SQL Server using OpenLayers and GeoServer

I am using the SQL Server plugin for GeoServer (http://docs.geoserver.org/stable/en/user/data/database/sqlserver.html) to show some geometries using WMS. This works fine. I am also able to extract geometries as vectors without much trouble.
Now I need to add the retrieved vector to another layer and save it to a table in the SQL Server database. This is causing some problems.
This is some of the code:
saveStrategy = new OpenLayers.Strategy.Save();
saveStrategy.events.register("success", '', ChangesSuccess);
saveStrategy.events.register("fail", '', ChangesFailed);

function ChangesSuccess(e) {
    alert('Done');
}

function ChangesFailed(e) {
    alert('Failed');
}
selectionLayer = new OpenLayers.Layer.Vector("SelectionLayer", {
    strategies: [new OpenLayers.Strategy.BBOX(), saveStrategy],
    projection: new OpenLayers.Projection("EPSG:25832"),
    protocol: new OpenLayers.Protocol.WFS({
        version: "1.1.0",
        url: "http://someserver.cloudapp.net:8181/geoserver/wfs",
        featurePrefix: 'xxxx',
        featureType: "xxxxxxxxx",
        featureNS: "xxx.xxx/xxx",
        geometryName: "xxxxx"
    }),
    displayInLayerSwitcher: false
});
selectControl.events.register("featureselected", this, function (e) {
    var feat = e.feature;
    feat.state = OpenLayers.State.INSERT;
    selectionLayer.addFeatures([feat]);
    saveStrategy.save();
});
When I try to save the newly added feature I get the following exception in the fail event of the save strategy:
"java.lang.AbstractMethodError:org.geotools.jdbc.BasicSQLDialect.encodeGeometryValue(Lcom/vividsolutions/jts/geom/Geometry;ILjava/lang/StringBuffer;)V org.geotools.jdbc.BasicSQLDialect.encodeGeometryValue(Lcom/vividsolutions/jts/geom/Geometry;ILjava/lang/StringBuffer;)V"
I don't know what to try from here, but if someone has encountered this before or has some suggestions on what the issue could be, I am more than happy to hear about it. It is 1:50am here now and I am probably not going to bed before this is fixed, so all suggestions are more than welcome :)

Related

Cypress 10 and connecting to an Oracle database

So I've got a new Cypress 10 project, and I'm trying to integrate some functionality to allow me to make some basic database calls to our Oracle database (which is on a server I have direct access to, not running locally).
I've been following this guide, which shows how to add the oracledb package as a Cypress plugin, but the method it uses (the /plugins directory) has been deprecated in Cypress 10, so I can't follow the example exactly.
I've instead tried applying this logic using the Cypress plugin documentation as a guide and I think I have something that almost works, but I can't seem to connect to any database, even if the location is in my tnsnames.ora file (although I'm providing the connection string directly for this particular project).
Here's what my cypress.config.ts file looks like, with the code I've created (I'm using Cucumber in my implementation too, which is why those references are present):
import { defineConfig } from "cypress";
import createBundler from "@bahmutov/cypress-esbuild-preprocessor";
import { addCucumberPreprocessorPlugin } from "@badeball/cypress-cucumber-preprocessor";
import createEsbuildPlugin from "@badeball/cypress-cucumber-preprocessor/esbuild";

const oracledb = require("oracledb");
oracledb.initOracleClient({ libDir: "C:\\Users\\davethepunkyone\\instantclient_21_6" });

// This data is correct, I've obscured it for obvious reasons
const db_config = {
    "user": "<username>",
    "password": "<password>",
    "connectString": "jdbc:oracle:thin:@<hostname>:<port>:<sid>"
}
const queryData = async (query, dbconfig) => {
    let conn;
    try {
        // It's failing on this getConnection line
        conn = await oracledb.getConnection(dbconfig);
        console.log("NOTE===>connect established")
        return await conn.execute(query);
    } catch (err) {
        console.log("Error===>" + err)
        return err
    } finally {
        if (conn) {
            try {
                conn.close();
            } catch (err) {
                console.log("Error===>" + err)
            }
        }
    }
}
async function setupNodeEvents(
    on: Cypress.PluginEvents,
    config: Cypress.PluginConfigOptions
): Promise<Cypress.PluginConfigOptions> {
    await addCucumberPreprocessorPlugin(on, config);
    on("file:preprocessor", createBundler({
        plugins: [createEsbuildPlugin(config)],
    }));
    on("task", {
        sqlQuery: (query) => {
            return queryData(query, db_config);
        },
    });
    return config;
}

export default defineConfig({
    e2e: {
        specPattern: "**/*.feature",
        supportFile: false,
        setupNodeEvents,
    },
});
I've then got some Cucumber code to run a test query:
Then("I do a test database call", () => {
// Again this is an example query for obvious reasons
const query = "SELECT id FROM table_name FETCH NEXT 1 ROWS ONLY"
cy.task("sqlQuery", query).then((resolvedValue: any) => {
resolvedValue["rows"].forEach((item: any) => {
console.log("result==>" + item);
});
})
})
And here are the dependencies from my package.json:
"dependencies": {
"#badeball/cypress-cucumber-preprocessor": "^12.0.0",
"#bahmutov/cypress-esbuild-preprocessor": "^2.1.3",
"cypress": "^10.4.0",
"oracledb": "^5.4.0",
"typescript": "^4.7.4"
},
I feel like I'm somewhat on the right track: when I run the feature step above, the error I get back is:
Error===>Error: ORA-12154: TNS:could not resolve the connect identifier specified
This makes me think it has at least called the node-oracledb package to generate the error, but I can't really tell if I've made an obvious mistake (I'm pretty new to JS/TS). I know I've referenced the right path for the Oracle Instant Client and that it's been initialized correctly, because Cypress reports a config error if the path is wrong. I know the database details work as well, because we have an older Selenium implementation that can connect using the details I'm providing.
I'm mostly curious to know whether anyone has successfully implemented an oracledb connection with Cypress 10, or whether someone with a bit more Cypress experience can spot an obvious error in my code, as resources for this particular combination of packages seem to be non-existent (possibly because Cypress 10 is reasonably new).
NOTE: I am planning to switch to environment variables for the database connection information that will eventually be passed into the project - I just want to get a connection working before I tackle that issue.
Oracle's C-stack drivers like node-oracledb do not use Java, so the JDBC connection string needs changing from:
"connectString": "jdbc:oracle:thin:#<hostname>:<port>:<sid>"
If you were using:
jdbc:oracle:thin:@mydbmachine.example.com:1521/orclpdb1
then your Node.js code should use:
connectString : "mydbmachine.example.com:1521/orclpdb1"
Since you're using the very obsolete SID syntax, check the node-oracledb manual for the solution if you can't use a service name: JDBC and Oracle SQL Developer Connection Strings.
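For illustration, here is a minimal sketch of the corrected db_config, reusing the example host, port and service name from above (all values are placeholders):
// node-oracledb expects Easy Connect syntax, not a JDBC URL.
// The host, port and service name below are placeholder values.
const db_config = {
    user: "<username>",
    password: "<password>",
    // format: hostname:port/service_name -- note the slash before the service name
    connectString: "mydbmachine.example.com:1521/orclpdb1"
}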

How to incorporate SQL Server in a Nuxt.js app

I am trying to use SQL Server with a Nuxt app, and incorporate some basic CRUD functionality with tables. Does anybody have any insight or examples on this? I understand (I think) that the calls to the db would be exposed in an api folder and registered as a serverMiddleware. Any examples would be appreciated! I'm currently using the node-mssql package as it seems to be the popular choice.
Instead of a plain driver, I would suggest a real ORM library that lets you actually define your models; CRUD operations are then much easier to maintain than hand-written queries.
A common choice is Sequelize, and you can easily start with something like this.
In your nuxt.config.js, add a serverMiddleware entry pointing to a brand-new folder, for example /api:
module.exports = {
    // ...
    serverMiddleware: ['~/api/index.js'],
    env: {
        DB_HOST: process.env.DB_HOST || 'db-host',
        DB_DATABASE: process.env.DB_DATABASE || 'db-database',
        DB_USER: process.env.DB_USER || 'db-user',
        DB_PASS: process.env.DB_PASS || 'db-pass'
    },
    // ...
}
Then start writing your Express/Sequelize calls there, just as if you were creating a REST API; a sketch follows below.
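For example, a minimal sketch of what ~/api/index.js could look like (the route and handler are hypothetical placeholders):
// ~/api/index.js -- hypothetical skeleton for the server middleware.
const express = require('express');
const app = express();

app.use(express.json());

// Example route; replace with your own Sequelize-backed handlers.
app.get('/users', async (req, res) => {
    res.json([]); // e.g. res.json(await User.findAll());
});

// Exporting the app lets Nuxt register it as server middleware.
module.exports = app;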
With Sequelize you have a vast number of dialects to choose from; since you want MS SQL, just install the tedious package and configure the dialect accordingly:
const Sequelize = require('sequelize');

const sequelize = new Sequelize(DB_DATABASE, DB_USER, DB_PASS, {
    host: DB_HOST,
    dialect: 'mssql',
    logging: process.env.NODE_ENV !== 'production' ? console.log : false, // eslint-disable-line no-console
    pool: {
        max: 5,
        min: 0,
        idle: 10000,
    },
    define: {
        engine: 'InnoDB',
        collate: 'latin1_swedish_ci',
    },
    dialectOptions: {
        // stream: proxyConnection,
        options: {
            encrypt: true,
            requestTimeout: 300000,
            enableArithAbort: false,
        },
    },
});
After the initial setup, just create your own models and use them, for example:
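A minimal sketch of a hypothetical model with basic CRUD calls (the User model and its fields are illustrative assumptions, not part of the original answer):
const { DataTypes } = require('sequelize');

// Hypothetical model definition for illustration.
const User = sequelize.define('User', {
    name: { type: DataTypes.STRING, allowNull: false },
    email: { type: DataTypes.STRING, unique: true },
});

async function crudExample() {
    await sequelize.sync(); // create the table if it does not exist
    const user = await User.create({ name: 'Jane', email: 'jane@example.com' }); // create
    const found = await User.findByPk(user.id); // read
    await found.update({ name: 'Jane Doe' }); // update
    await found.destroy(); // delete
}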
Sequelize has a really big community ready to help you if needed, either through the docs or on Slack.

Is it safe to access Elasticsearch from a client without going through an API server?

For example, suppose you embed the following Javascript code in Vue.js or React.js.
var elasticsearch = require('elasticsearch');

var esclient = new elasticsearch.Client({
    host: '<your Elasticsearch Cloud host URL>'
});

esclient.search({
    index: 'your index',
    body: {
        query: {
            match: { message: 'search keyword' }
        },
        aggs: {
            your_states: {
                terms: {
                    field: 'your field',
                    size: 10
                }
            }
        }
    }
}).then(function (response) {
    var hits = response.hits.hits;
});
For the search engine of an application like Stack Overflow, if the role settings of Elasticsearch Cloud only allow GET requests from the public, I thought the same thing could be achieved with the client-side code above, even without preparing an API server.
Is this a security problem? (For example, is it dangerous for the host name to end up on the client side?)
If there is no problem, the search engine response would be faster and the implementation cost lower, so I wondered why many people don't do this. (Sample code like this is rarely seen on the net.)
Thank you.
It is NOT a good idea.
If any client with a bit of programming knowledge finds out your Elasticsearch IP address, you are screwed: they could basically delete all the data without you even noticing.
I have no experience with X-Pack Security, but if you are not using it you are absolutely forced to hide ES behind an API.
You also have to secure your ES domain to allow access only from the API server and block the rest of the world.
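As an illustration of the "hide ES behind an API" approach, here is a minimal sketch of a read-only search proxy (the index name, field and port are placeholder assumptions):
const express = require('express');
const elasticsearch = require('elasticsearch');

const app = express();
// The ES client runs on the server; the ES host never reaches the browser.
const esclient = new elasticsearch.Client({ host: '<your Elasticsearch host URL>' });

app.get('/search', function (req, res) {
    // Only this fixed, whitelisted query shape is exposed to clients.
    esclient.search({
        index: 'your index',
        body: { query: { match: { message: String(req.query.q || '') } } }
    }).then(function (response) {
        res.json(response.hits.hits);
    }).catch(function () {
        res.status(500).json({ error: 'search failed' });
    });
});

app.listen(3000);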

couchdb update design doc

I have a Node.js application where I connect to my CouchDB using nano with the following script:
const { connectionString } = require('../config');
const nano = require('nano')(connectionString);

// creates the database, or fails silently if it exists
nano.db.create('foo');

module.exports = {
    foo: nano.db.use('foo')
}
This script is running on every server start, so it tries to create the database 'foo' every time the server (re)starts and just fails silently if the database already exists.
I like this idea a lot because this way I'm actually maintaining the database at the application level and don't have to create databases manually when I decide to add a new database.
Taking this approach one step further I also tried to maintain my design docs from application level.
...
nano.db.create('foo');
const foo = nano.db.use('foo');

const design = {
    _id: "_design/foo",
    views: {
        by_name: {
            map: function(doc) {
                emit(doc.name, null);
            }
        }
    }
}

foo.insert(design, (err) => {
    if (err)
        console.log('design insert failed');
})

module.exports = {
    foo
}
Obviously this will only insert the design doc if it doesn't exist. But what if I change my design doc and want to update it in the database?
I tried:
foo.get("_design/foo", (err, doc) => {
if(err)
return foo.insert(design);
design._rev = doc._rev
foo.insert(design);
})
The problem now is that the design document is updated every time the server restarts (i.e. it gets a new _rev on every restart).
Now... my question(s) :)
1: Is this a bad approach for bootstrapping my CouchDB with databases and designs? Should I consider some migration steps as part of my deployment process?
2: Is it a problem that my design doc gets many _revs, basically one for every deployment and server restart, even if the document itself hasn't changed? And if so, is there a way to only update the document when it has changed? (I thought of manually setting the _rev to some value in my application, but I am very unsure that would be a good idea.)
Your approach seems quite reasonable. If the checks happen only at restarts, this won't even be a performance issue.
Too many _revs can become a problem. The history of _revs is kept as _revs_info and stored with the document itself (see the CouchDB docs for details). Depending on your setup, it might be a bad decision to create unnecessary revisions.
We had a similar challenge with some server-side scripts that required certain views. Our solution was to calculate a hash over the old and new design document and compare them. You can use any hashing function for this job, such as sha1 or md5.
Just remember to remove the _rev from the old document before hashing it, or otherwise you will get different hash values every time.
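A minimal sketch of that idea, assuming the design doc's map functions are stored as strings so that JSON.stringify captures them (see the follow-up below for exactly that caveat):
const crypto = require('crypto');

// Hash a design doc without its _rev, so stored and local copies compare equal.
function designHash(doc) {
    const { _rev, ...rest } = doc;
    return crypto.createHash('md5').update(JSON.stringify(rest)).digest('hex');
}

foo.get('_design/foo', (err, existing) => {
    if (err) return foo.insert(design);
    if (designHash(existing) !== designHash(design)) {
        design._rev = existing._rev;
        foo.insert(design); // only update when the content actually changed
    }
});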
I tried the md5 comparison like @Bernhard Gschwantner suggested. But I ran into some difficulties, because in my case I'd like to write the map/reduce functions in the design documents in pure JavaScript in my code:
const design = {
    _id: "_design/foo",
    views: {
        by_name: {
            map: function(doc) {
                emit(doc.name, null);
            }
        }
    }
}
while getting the design doc from CouchDB returns the map/reduce functions converted to strings:
...
"by_name": {
    "map": "function (doc) {\n emit(doc.name, null);\n }"
},
...
Obviously the md5 comparison does not really work here.
I ended up with a very simple solution: just putting a version number on the design doc.
const design = {
    _id: "_design/foo",
    version: 1,
    views: {
        by_name: {
            map: function(doc) {
                emit(doc.name, null);
            }
        }
    }
}
When I update the design doc, I simply increment the version number and compare it with the version number in the database:
const fooDesign = { ... }

foo.get('_design/foo', (err, design) => {
    if (err)
        return foo.insert(fooDesign);
    console.log('comparing foo design version', design.version, fooDesign.version);
    if (design.version !== fooDesign.version) {
        fooDesign._rev = design._rev;
        foo.insert(fooDesign, (err) => {
            if (err)
                return console.log('error updating foo design', err);
            console.log('foo design updated to version', fooDesign.version)
        });
    }
});
Revisiting your question: in a recent project I used the great couchdb-push module by Johannes Schmidt. You get conditional updates for free, along with many other benefits inherited from its dependency couchdb-compile.
That library turned out to be a hidden gem for me. HIGHLY recommended!

Get the current browser name in Protractor test

I'm creating users in some tests. Since the suite is connected to the backend and creates real users, I need fixtures. I was thinking of using the browser name to create a unique user. However, it has proven quite difficult to get to it...
Anyone can point me in the right direction?
Another case of rubber ducking :)
The answer was actually quite simple.
In my onPrepare function I added the following code, and it works flawlessly.
browser.getCapabilities().then(function (cap) {
    browser.browserName = cap.caps_.browserName;
});
I can then access the name in my tests using browser.browserName.
This has changed in Protractor versions from 3.2 onwards (Selenium WebDriver 2.52).
Now one should call:
browser.driver.getCapabilities().then(function (caps) {
    browser.browserName = caps.get('browserName');
});
If you want to skip a particular browser, you can do something like this:
it('User should see a message that he has already been added to the campaign when entering the same email twice', function () {
    var browserName, platform;
    browser.getCapabilities().then(function (capabilities) {
        browserName = capabilities.caps_.browserName;
        platform = capabilities.caps_.platform;
    }).then(function () {
        console.log('Browser:', browserName, 'on platform', platform);
        if (browserName === 'internet explorer') {
            console.log('IE was avoided for this test.');
        } else {
            basePage.email.sendKeys('bruno@test.com');
            console.log('Sent the email');
            basePage.subscribe.click().then(function () {
                basePage.confirmMessage('Contact already added to target campaign');
            });
        }
    });
});
