PostGIS loader issues metadata queries but no data query

EDIT 1: I just discovered that PostGIS has a built-in ST_AsMVT function, which does exactly what I want (I think), so I'm not going to use mapnik at all!
EDIT 2: Unfortunately that function isn't in a released version of PostGIS yet, but hopefully it will be within the next few weeks.
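For what it's worth, once ST_AsMVT is available the whole tile can be produced by a single SQL query. A hedged sketch of building such a query string for a client like `pg` (the exact ST_AsMVT/ST_AsMVTGeom signatures may differ between PostGIS versions, and `mvtQuery` is just an illustrative helper; the envelope bounds $1..$4 would be computed from z/x/y by the caller):

```javascript
// Build a tile query using ST_AsMVT/ST_AsMVTGeom. 'some_layer', the table
// and the 'geom' column come from the question; everything else is a sketch.
function mvtQuery() {
  return (
    "SELECT ST_AsMVT(q, 'some_layer') AS mvt " +
    "FROM (" +
    "SELECT ST_AsMVTGeom(geom, ST_MakeEnvelope($1, $2, $3, $4, 4326)) AS geom " +
    "FROM some_geometry_table" +
    ") AS q"
  );
}
```

The resulting `mvt` column is the protobuf tile, so no mapnik rendering step is needed.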
Originally posted as a GitHub issue.
When I do something like the following, with logging turned on for my DB, I see that some "metadata" requests are made to PostGIS; however, no actual data query is ever made.
The metadata requests are (presumably) needed for the logic relating to field names/types and extent (when not explicitly provided).
var postgis = new mapnik.Datasource({
  type: 'postgis',
  host: ... etc,
  table: 'some_geometry_table',
  geometry_field: 'geom',
  srid: 4326,
  extent: "-180,-85.0511,180,85.0511",
  estimate_extent: false,
  row_limit: 10 // !! this doesn't seem to do anything
});
var map = new mapnik.Map(256, 256);
var layer = new mapnik.Layer('some_layer');
layer.datasource = postgis;
map.add_layer(layer);
map.render(new mapnik.VectorTile(z, x, y), {}, (err, vtile) => {
  if (err) return next(err);
  var data = vtile.getDataSync({});
  var file = path + z + "," + x + "," + y + ".pbf";
  console.log(data);
  console.log("written: " + file);
  fs.writeFileSync(file, data);
  next(null);
});
Metadata query as seen in postgres logs:
SELECT * FROM some_geometry_table LIMIT 0
What am I doing wrong?

Related

discord.js saving an attachment "undefined"?

I've had a problem recently with users trolling and then deleting images before I can see what they are. So I'm creating a log that downloads everything. (Yes, I've already required fs.) For some reason, though, when writing the file, the file is only 9 bytes big (and the content is just "undefined"). Please help.
var attachment = (message.attachments).array();
attachment.forEach(function(attachment) {
  console.log(attachment.url);
  tempName = attachment.url.split("/");
  attachName = tempName[tempName.length - 1];
  console.log(attachName);
  fs.writeFileSync(dir + "/" + attachName, attachment.file, (err) => {
    // throws an error, you could also catch it here
    if (err) throw err;
    // success case, the file was saved
    console.log('attachment saved!');
  });
  theLog += '<img src="' + "attachments/" + message.channel.name + "/" + attachName + '"> \n';
  //theLog += '<img src="'+ attachment.url + '"> \n';
})
Let's start by answering why it saves the file as "undefined".
If you check the docs for MessageAttachment, message.attachments.first().file is undefined. There are fileName and fileSize properties, but no file.
To save the file, you can do one of two things:
Saving the URLs
You can save the url in an array in a JSON file like so:
JSON FILE
{
"images":[]
}
JS FILE
let imgs = require(JSON_FILE);
imgs.images.push(attachment.url);
fs.writeFile(JSON_FILE, JSON.stringify(imgs, null, 4), (err) => {
  if (err) throw err;
});
Saving the IMAGE itself
You can use the request module to pull images from a url
JS FILE
//Start of code
let request = require('request');
let fs = require('fs');
//Later
request.get(attachment.url)
  .on('error', console.error)
  .pipe(fs.createWriteStream(`Img-${Date.now()}`)); // `Date.now()` guarantees unique file names
EDIT: request is deprecated. It has been replaced by fetch. I can't confirm this code works with fetch, but the underlying principle is the same.
I ended up solving it with a tiny function. Thanks everyone (especially the guy asking what a variable was... that was super helpful)
function downloadAttachment(url, dest, hash) {
  console.log('initiating download of ' + url + '...');
  request(url).pipe(fs.createWriteStream(dest));
}
the "hash" variable is not used right now. I was hungry and craving corned beef hash...

Mapping relational MongoDB data in website : help needed

I run an Android app which locates OBJECTS on a map. OBJECTS have attributes like ID, Name, Owner, Type and Place_ID, and are linked to PLACES. PLACES have attributes like ID, Latitude, Longitude, Opening Hour, Closing Hour, ... The data is stored in a MongoDB on Back4App, and I want to keep it that way. I have one class for OBJECTS and one class for PLACES. The relation between OBJECTS and PLACES is not "a MongoDB relation"; it is just a common String field in the OBJECTS and PLACES classes.
In order to allow offline access to the data and to minimize DB server requests, the app synchronizes a local SQLite database on the device with the online MongoDB. In the Android app, queries run against the SQLite DB.
I'm trying to make a website which does the same job as the app: displaying filtered data from the MongoDB.
I started with a simple HTML and JavaScript website using the Parse SDK, but I'm facing a few difficulties.
A simple query is to list all the OBJECTS in a 50 km radius, i.e. I need the OBJECTS and the PLACE where each one is located. However, where I could get this easily with a SELECT...JOIN in SQLite, I cannot get this information through a simple Parse query because I need the OBJECTS too. And I cannot run 2 asynchronous queries in a for loop.
What website architecture and/or languages would you recommend for this type of website?
How would you recommend I proceed?
Thanks in advance for your help.
EDIT: ZeekHuge opened my eyes to the bad design of not using pointers. After implementing pointers in my MongoDB, here are the lines of code which did it for me:
Parse.initialize("", "");
Parse.serverURL = '';
var eiffel = new Parse.GeoPoint(48.858093, 2.294694);
var myScores = '';
var Enseigne = Parse.Object.extend("ENSEIGNE");
var Flipper = Parse.Object.extend("FLIPPER");
var query = new Parse.Query(Flipper);
var innerquery = new Parse.Query(Enseigne);
innerquery.withinKilometers("ENS_GEO", eiffel, 500);
query.equalTo("FLIP_ACTIF", true);
query.include("FLIP_ENSPOINT");
query.include("FLIP_MODPOINT");
query.matchesQuery("FLIP_ENSPOINT", innerquery);
query.find({
  success: function(results) {
    for (var i = 0; i < results.length; i++) {
      var object = results[i];
      myScores += '<tr><td>' + object.get('FLIP_MODPOINT').get('MOFL_NOM')
        + '</td><td>' + object.get('FLIP_ENSPOINT').get('ENS_NOM')
        + '</td><td>' + object.get('FLIP_ENSPOINT').get('ENS_GEO').latitude
        + '</td><td>' + object.get('FLIP_ENSPOINT').get('ENS_GEO').longitude
        + '</td></tr>';
    }
    (function($) {
      $('#results-table').append(myScores);
    })(jQuery);
  },
  error: function(error) {
    alert("Error: " + error.code + " " + error.message);
  }
});
Solved by replacing the database keys with pointers and using the innerquery and include functions. See the example mentioned in the question.

Node + SQL Server : Get row data as stream

I'm trying to use the Node.js connector with the Microsoft driver to communicate with a SQL Server. In the connector docs I found a promising option named 'stream', which adds the ability to obtain row objects asynchronously.
My data has a specific characteristic: some columns hold large binary data (> 100 MB), so even a single row may be really large. I'm looking for a way to get each row's data as a stream. This is possible in the .NET driver (the CommandBehavior.SequentialAccess enumeration). Is it possible in Node.js?
UPDATED
Here is some code to demonstrate the problem:
Custom writable stream module:
var stream = require('stream');
var util = require('util');

function WritableObjects() {
  stream.Writable.call(this, {
    objectMode: true
  });
}

util.inherits(WritableObjects, stream.Writable);

WritableObjects.prototype._write = function(chunk, encoding, doneWriting) {
  console.log('write', chunk, encoding);
  doneWriting();
};

module.exports = {
  WritableObjects: WritableObjects
};
and database query code:
var sw = new wstream.WritableObjects();
var request = new sql.Request(connection);

request.stream = true;
request.pipe(sw);
request.query('SELECT DataId, Data FROM ds.tData WHERE DataId in (1)');

sw.on('error', function(err) {
  console.log('Stream err ', err);
});
sw.on('pipe', function(src) {
  console.log('Stream pipe ');
});
sw.on('finish', function(data) {
  console.log('Stream finish');
});
In this example the chunk parameter of the _write method contains the whole data of a DB record, not a stream. Because the Data field contains big varbinary data, the memory of the node process also grows huge.
Yes, you can stream query results with the node-mssql package, as stated here: https://github.com/patriksimek/node-mssql
stream - Stream recordsets/rows instead of returning them all at once as an argument of callback (default: false). You can also enable streaming for each request independently (request.stream = true). Always set to true if you plan to work with large amount of rows.

How can I make use of LOCAL VARIABLES within a heroku postgres query?

I have looked around in many, MANY threads and through various documentation, but for what seems like such an incredibly SIMPLE task, this is driving me insane.
I have a node.js webapp which generates a userId upon login, and is stored within a session object.
req.user.id <== my local variable for the user id.
A snippet of the code I have so far is along these lines:
var query = client.query("SELECT * FROM programs WHERE authorid = req.user.id", function (err, result) {
  if (err) {
    // Do erroneous things
  } else {
    // Do good things
  }
});
What am I doing wrong? How can I do this simple task of comparing a database entry to a value stored in a local variable?
Any / all help appreciated - I've been trying to do this for 6 hours.
From the GitHub page for the Node.js PostgreSQL client, it looks like you can pass and use query parameters like:
client.query("SELECT * FROM programs WHERE authorid = $1",
[req.user.id], function(err, result) { ...

Protractor console log

I want to output the text of a div in my protractor test, so far I have:
console.log(ptor.findElement(protractor.By.id('view-container')).getText());
but this outputs
[object Object]
I tried "toString()" and same result.
Is there a way to output the text to the console?
getText and most other Protractor methods return promises. You want to put your console.log statement inside the promise resolution:
Using the new Protractor syntax:
element(by.id('view-container')).getText().then(function(text) {
console.log(text);
});
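The same pattern holds for any promise, not just Protractor's. A minimal plain-Node illustration, where `fakeGetText` stands in for `getText()`:

```javascript
// A stand-in for element(...).getText(): it returns a promise, not a string,
// which is why logging it directly prints an object representation.
const fakeGetText = () => Promise.resolve('some div text');

console.log(typeof fakeGetText()); // 'object', not 'string'

// Resolving the promise yields the actual text:
fakeGetText().then(function (text) {
  console.log(text); // 'some div text'
});
```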
This is pretty old, but as a former n00b at Protractor, I wish there was more documentation.
you could also use:
element(by.id('view-container')).getText().then(console.log);
or what I like to do for readability is put all the objects on a page in their own function, section, or file:
//top declaration of variables
var viewContainer = element(by.id('view-container')).getText();
.... //bunch of code
....
viewContainer.then(console.log);
That will take care of most of your garden-variety debugging needs.
For promises in general, you could try using protractor.promise.all().
Let's say you have two things that are both promises:
var getTime = element(by.xpath(theTimeXpath)).getText();
var getPageTitle = element(by.xpath(thePageTitle)).getInnerHtml();
protractor.promise.all([getTime, getPageTitle]).then(function(theResultArray) {
  var timeText = theResultArray[0];
  var pageTitleInnerHtml = theResultArray[1];
  console.log(timeText); // outputs the actual text
  console.log(pageTitleInnerHtml); // outputs the text of the inner html
});
This second method is useful when things begin to get more complex. Personally, however, I find other ways around this. Although it's not bad, it's kind of funky for other developers having to read my code.
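Since protractor.promise.all mirrors the standard Promise.all, the pattern can be tried in plain Node with stand-in promises (the hard-coded values replace real getText()/getInnerHtml() results):

```javascript
// Two already-resolved promises standing in for Protractor element calls.
var getTime = Promise.resolve('12:34');
var getPageTitle = Promise.resolve('<h1>Title</h1>');

// Promise.all resolves with an array of results, in the same order
// as the promises that produced them.
Promise.all([getTime, getPageTitle]).then(function (theResultArray) {
  var timeText = theResultArray[0];
  var pageTitleInnerHtml = theResultArray[1];
  console.log(timeText); // '12:34'
  console.log(pageTitleInnerHtml); // '<h1>Title</h1>'
});
```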
I would like to suggest a small improvement to other answers.
short answer: I like to use browser.sleep(0).then(..); wherever I need to push something into Protractor's control flow.
It is generic and easy to move around.
tl;dr
so using the above, you can easily add a function on browser (or ptor), something like:
browser.log = function(logger, level, msg) {
  browser.sleep(0).then(function() { logger[level](msg); });
};
or something a bit more sophisticated with apply - but that depends on your logger.
You can obviously enhance that a bit to have a logger-like API:
var logger = browser.getLogger('name');
which could be implemented like so (let's assume log4js):
browser.getLogger = function(name) {
  var logger = require('log4js').getLogger(name);
  function logMe(level) {
    return function(msg) {
      browser.sleep(0).then(function() { logger[level](msg); });
    };
  }
  return { info: logMe('info'), ... };
}
basically, the sky is the limit.
I am sure there's a way to make my code a lot shorter; the point is using the sleep method as a basis.
You could always assert that the text you get is the text you expect:
expect(element(by.id('view-container')).getText()).toBe('desired-text');
you can try this one:
const textInfo = element(by.id('view-container'));
textInfo.getText().then(text => console.log('text: ', text));
When you want to log the expected and actual result in Protractor, always use the then method:
verifyDisplayedText(locator: Locator, expectedText: string) {
  const text = this.getText(locator);
  try {
    text.then(function(value) {
      if (value.trim() === expectedText) {
        verifyLog("VERIFICATION: PASSED. Expected: '" + expectedText + "' Actual: '" + value + "'");
      } else {
        errorLog("VERIFICATION: FAILED. Expected: '" + expectedText + "' Actual: '" + value + "'");
      }
    });
    expect(text).toBe(expectedText);
  } catch (error1) {
    errorLog("VERIFICATION: FAILED. Expected: '" + expectedText + "' Actual: '" + text + "'");
    throw error1;
  }
}
If you're in 2021, you will want to read this answer.
According to Protractor's documentation, .getText() returns a promise. A promise is an object in JavaScript, so that is what you're logging.
You need to get the value out of the promise by resolving it.
The best way to handle a promise, as of 2021, is to use the async/await keywords. This makes Protractor 'freeze' and wait until the promise is resolved before running the next command:
it('test case 1', async () => {
  let text = await ptor.findElement(protractor.By.id('view-container')).getText();
  console.log(text);
  // or directly
  console.log(await ptor.findElement(protractor.By.id('view-container')).getText());
})
.then() can also be used, but using async/await will make your code a lot more readable and easier to debug.
