Redeem Command (discord.js)

Hello, my name is Jake. I'm looking for help creating a redemption command in discord.js. The baseline idea:
!redeem XXX-XXX-XXXX
If it's a valid code, it runs some code (enables the bot's premium features for that server and makes the code invalid).
I was thinking of using a generator, generating a few thousand codes, and storing them in a .env file. How exactly would I code this?

I think the best approach is to generate the codes and save them in a database table. Then, when a user tries to redeem a code, you can query the database and see if it exists.
For the database I recommend sqlite3, since it's lightweight.
For generating the codes there is a good library you can use, voucher-code-generator.
For example:
// npm install sqlite3 voucher-code-generator
const sqlite3 = require('sqlite3').verbose();
const voucher_codes = require('voucher-code-generator');

// the database file name is up to you
const db = new sqlite3.Database('./premium.db');

// generating and saving a single promo code
const addCode = () => {
  // generate() returns an array of codes; take the first one
  const [code] = voucher_codes.generate({
    pattern: "###-###-####",
  });
  db.run("INSERT INTO promo_codes(code) VALUES (?)", code, function (err) {
    if (err) {
      return console.log(err.message);
    }
  });
};

db.serialize(() => {
  db.run('CREATE TABLE IF NOT EXISTS promo_codes(code TEXT)');

  // generating 1000
  for (let i = 0; i < 1000; i++) {
    addCode();
  }
});
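As a rough sketch (not tested), the redeem side could look like this; it assumes the same db handle as above, a discord.js message handler, and leaves the actual premium-enabling logic as a placeholder:
// inside your message handler, e.g. client.on('message', message => { ... })
if (message.content.startsWith('!redeem ')) {
  const code = message.content.slice('!redeem '.length).trim();

  // look the code up; if it exists, consume it and enable premium
  db.get('SELECT code FROM promo_codes WHERE code = ?', code, (err, row) => {
    if (err) return console.log(err.message);
    if (!row) return message.reply('That code is not valid.');

    // delete the code so it can only be redeemed once
    db.run('DELETE FROM promo_codes WHERE code = ?', code, (delErr) => {
      if (delErr) return console.log(delErr.message);
      // placeholder: your own premium logic for this guild
      // enablePremium(message.guild.id);
      message.reply('Premium features enabled for this server!');
    });
  });
}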

Related

Having problems making a command that allows people to turn the blacklist on and off

I thought that allowing people to turn my blacklist on and off for their server would be kinda neat, but I haven't had much success so far. I was also wondering: is it hard to make a command that lets people add their own words to the blacklist? Here's the code:
let blacklisted = ['bad words here'];

client.on('message', message => {
  let foundInText = false;
  for (const word of blacklisted) {
    if (message.content.toLowerCase().includes(word.toLowerCase())) foundInText = true;
  }
  if (foundInText) {
    message.delete();
  }
});
You could use a variable to turn the blacklist on or off, e.g. blacklistOn = true, and check it with an if statement before the code you pasted here. Then make a command that changes that variable. You would need to store the variable in a JSON file if you want the setting to survive a restart or a crash. Tutorial on reading/writing JSON with Node.js: https://medium.com/@osiolabs/read-write-json-files-with-node-js-92d03cc82824
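A minimal sketch of that idea, assuming the client and blacklisted array from the question; the settings.json file name and the !blacklist command are just illustrative choices:
const fs = require('fs');

// load the saved setting, defaulting to "on" if the file doesn't exist yet
let settings = { blacklistOn: true };
try {
  settings = JSON.parse(fs.readFileSync('./settings.json', 'utf8'));
} catch (e) { /* first run: keep defaults */ }

client.on('message', message => {
  // toggle command, e.g. "!blacklist on" / "!blacklist off"
  // (you would probably also want a permission check here)
  if (message.content === '!blacklist on' || message.content === '!blacklist off') {
    settings.blacklistOn = message.content.endsWith('on');
    fs.writeFileSync('./settings.json', JSON.stringify(settings));
    return message.reply(`Blacklist is now ${settings.blacklistOn ? 'on' : 'off'}.`);
  }

  // only run the filter when the blacklist is enabled
  if (settings.blacklistOn &&
      blacklisted.some(w => message.content.toLowerCase().includes(w.toLowerCase()))) {
    message.delete();
  }
});
For simplicity this stores a single global flag; for a per-server toggle you would key the saved setting by message.guild.id instead.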

Correct way to check if DocumentDB object exists

In Microsoft examples I have seen two ways to check whether a DocumentDB object (Database, DocumentCollection, Document, etc.) exists.
The first is by creating a query:
Database db = client.CreateDatabaseQuery().Where(x => x.Id == DatabaseId).AsEnumerable().FirstOrDefault();
if (db == null)
{
    await client.CreateDatabaseAsync(new Database { Id = DatabaseId });
}
The second one is by using "try catch" block:
try
{
    await this.client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(databaseName));
}
catch (DocumentClientException de)
{
    if (de.StatusCode == HttpStatusCode.NotFound)
    {
        await this.client.CreateDatabaseAsync(new Database { Id = databaseName });
    }
    else
    {
        throw;
    }
}
Which of these is the correct way to do this, in terms of performance?
You should use the new CreateDatabaseIfNotExistsAsync in the DocumentDB SDK instead of either of these approaches, if that's what you're trying to do.
In terms of server resources (request units), a ReadDocumentAsync is slightly lighter than a CreateDatabaseQuery, so you should use that when possible.
I've just seen the try/catch example in one of the Microsoft-provided sample projects, and it baffled me, because it is plain wrong: you don't use try/catch for control flow.
Never.
This is just bad code. The new SDK provides CreateDatabaseIfNotExistsAsync, which I can only hope doesn't just hide the same pattern internally. In the older library, just use the query approach, unless you want to get shouted at by whoever reviews the code.

Removing created records in e2e tests after all tests have run

I have been looking around for suitable ways to 'clean up' created records after tests are run using Protractor.
As an example, I have a test suite that currently runs tests on the create and update screens. There is currently no delete feature, but there is a delete endpoint I can hit on the backend API.
So the approach I have taken is to record the ID of the created record so that in an afterAll I can issue a request to delete that record.
For example:
beforeAll(function() {
  loginView.login();
  page.customerNav.click();
  page.customerAddBtn.click();
  page.createCustomer();
});

afterAll(function() {
  helper.buildRequestOptions('DELETE', 'customers/' + createdCustomerId).then(function(options) {
    request(options, function(err, response) {
      if (response.statusCode === 200) {
        console.log('Successfully deleted customer ID: ' + createdCustomerId);
        loginView.logout();
      } else {
        console.log('A problem occurred when attempting to delete customer ID: ' + createdCustomerId);
        console.log('status code - ' + response.statusCode);
        console.log(err);
      }
    });
  });
});
//it statements below...
Whilst this works, I am unsure whether this is a good or bad approach, and if the latter, what the alternatives are.
I'm doing this to prevent a whole load of dummy test records from accumulating over time. I know you could just clear down the database between test runs, e.g. through a script on a CI server, but it's not something I/we have looked into further. Plus this approach seems simpler on the face of it, but again I am unsure about the practicalities of doing this directly inside the test spec files.
Can anyone provide further comments/solutions?
Thanks
Well, for what it's worth, I basically use that exact same approach. We have an endpoint that can reset data for a specific user based on ID, and I hit that in a beforeAll() block to reset the data to an expected state before every run (I could have done it in afterAll as well, but sometimes people mess with the test accounts, so I do it in beforeAll). So I simply grab the user's ID and send the HTTP request.
I can't really speak to the practicality of it, as it was simply a task that I accomplished and it worked perfectly for me, so I saw no need for an alternative. Just wanted to let you know you are not alone in that approach :)
I'm curious if other people have alternative solutions.
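For illustration only, a rough sketch of that beforeAll() reset, reusing the request/helper pattern from the question; the reset endpoint, its HTTP method, and testUserId are hypothetical:
beforeAll(function(done) {
  // hypothetical endpoint that resets a test user's data to a known state
  helper.buildRequestOptions('POST', 'users/' + testUserId + '/reset').then(function(options) {
    request(options, function(err, response) {
      if (err || response.statusCode !== 200) {
        console.log('Problem resetting test data for user ID: ' + testUserId);
      }
      done();
    });
  });
});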
The more robust solution is to mock your server with $httpBackend so you don't have to make actual calls to your API.
You can then configure server responses from your e2e test specs.
Here's a fake server example:
angular.module('myModule')
  .config(function($provide, $logProvider) {
    $logProvider.debugEnabled(true);
  })
  .run(function($httpBackend, $log) {
    var request = new RegExp('\/api\/route\\?some_query_param=([^&]*)');
    $httpBackend.whenGET(request).respond(function(method, url, data) {
      $log.debug(url);
      // see http://stackoverflow.com/questions/24542465/angularjs-how-uri-components-are-encoded/29728653#29728653
      function decode_param(param) {
        return decodeURIComponent(param.
          replace('@', '%40').
          replace(':', '%3A').
          replace('$', '%24').
          replace(',', '%2C').
          replace(';', '%3B').
          replace('+', '%20'));
      }
      var params = url.match(request);
      var some_query_param = decode_param(params[1]);
      return [200, {
        // someResponse...
      }, {}];
    });
  });
Then load this script in your test environment and you're done.

couchdb update design doc

I have a Node.js application where I connect to my CouchDB using nano with the following script:
const { connectionString } = require('../config');
const nano = require('nano')(connectionString);

// creates the database, or fails silently if it already exists
nano.db.create('foo');

module.exports = {
  foo: nano.db.use('foo')
}
This script is running on every server start, so it tries to create the database 'foo' every time the server (re)starts and just fails silently if the database already exists.
I like this idea a lot because this way I'm actually maintaining the database at the application level and don't have to create databases manually when I decide to add a new database.
Taking this approach one step further, I also tried to maintain my design docs at the application level.
...
nano.db.create('foo');
const foo = nano.db.use('foo');

const design = {
  _id: "_design/foo",
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}

foo.insert(design, (err) => {
  if (err)
    console.log('design insert failed');
})

module.exports = {
  foo
}
Obviously this will only insert the design doc if it doesn't exist yet. But what if I change my design doc and want to update it?
I tried:
foo.get("_design/foo", (err, doc) => {
if(err)
return foo.insert(design);
design._rev = doc._rev
foo.insert(design);
})
The problem now is that the design document is updated on every server restart (i.e. it gets a new _rev every time).
Now... my question(s) :)
1: Is this a bad approach for bootstrapping my CouchDB with databases and design docs? Should I consider some migration steps as part of my deployment process instead?
2: Is it a problem that my design doc gets many _revs, basically one for every deployment and server restart, even if the document itself hasn't changed? And if so, is there a way to only update the document if it changed? (I thought of manually setting the _rev to some value in my application, but I'm very unsure that would be a good idea.)
Your approach seems quite reasonable. If the checks happen only at restarts, this won't even be a performance issue.
Too many _revs can become a problem, though. The history of _revs is kept as _revs_info and stored with the document itself (see the CouchDB docs for details). Depending on your setup, it might be a bad idea to create unnecessary revisions.
We had a similar challenge with some server-side scripts that required certain views. Our solution was to calculate a hash over the old and the new design document and compare them. You can use any hashing function for this job, such as sha1 or md5.
Just remember to remove the _rev from the old document before hashing it, otherwise you will get different hash values every time.
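For illustration, a minimal sketch of that hash comparison using Node's built-in crypto module, assuming the design and foo variables from the question (and keeping in mind the caveat in the next answer about map functions being real functions in code but strings in CouchDB):
const crypto = require('crypto');

// hash a design doc, ignoring its _rev
function designHash(doc) {
  const copy = Object.assign({}, doc);
  delete copy._rev;
  return crypto.createHash('md5').update(JSON.stringify(copy)).digest('hex');
}

foo.get('_design/foo', (err, existing) => {
  if (err) return foo.insert(design);
  if (designHash(existing) !== designHash(design)) {
    design._rev = existing._rev;
    foo.insert(design); // only update when the content actually changed
  }
});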
I tried the md5 comparison like @Bernhard Gschwantner suggested. But I ran into some difficulties, because in my case I'd like to write the map/reduce functions in the design documents in pure JavaScript in my code.
const design = {
  _id: "_design/foo",
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}
whereas getting the design doc from CouchDB returns the map/reduce functions converted to strings:
...
"by_name": {
"map": "function (doc) {\n emit(doc.name, null);\n }"
},
...
Obviously md5 comparison does not really work here.
I ended up with a very simple solution: just put a version number on the design doc:
const design = {
  _id: "_design/foo",
  version: 1,
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}
When I update the design doc, I simply increment the version number and compare it with the version number in database:
const fooDesign = {...}

foo.get('_design/foo', (err, design) => {
  if (err)
    return foo.insert(fooDesign);
  console.log('comparing foo design version', design.version, fooDesign.version);
  if (design.version !== fooDesign.version) {
    fooDesign._rev = design._rev;
    foo.insert(fooDesign, (err) => {
      if (err)
        return console.log('error updating foo design', err);
      console.log('foo design updated to version', fooDesign.version);
    });
  }
});
Revisiting your question: in a recent project I used the great couchdb-push module by Johannes Schmidt. You get conditional updates for free, along with many other benefits inherited from its dependency, couchdb-compile.
That library turned out to be a hidden gem for me. HIGHLY recommended!

How can I make use of LOCAL VARIABLES within a heroku postgres query?

I have looked around in many, MANY threads and through various documentation, but for what seems like such an incredibly SIMPLE task, this is driving me insane.
I have a Node.js web app which generates a user ID upon login and stores it in a session object.
req.user.id <== my local variable for the user id.
A snippet of the code I have so far is along these lines:
var query = client.query("SELECT * FROM programs WHERE authorid = req.user.id", function(err, result) {
  if (err) {
    // Do erroneous things
  } else {
    // Do good things
  }
});
What am I doing wrong? How can I do this simple task of comparing a database entry to a value stored in a local variable?
Any / all help appreciated - I've been trying to do this for 6 hours.
From the GitHub page for the Node.js PostgreSQL client (pg), it looks like you can pass and use parameters like this:
client.query("SELECT * FROM programs WHERE authorid = $1",
[req.user.id], function(err, result) { ...
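A slightly fuller sketch of that parameterized query, assuming the pg client from the question; the row handling is just illustrative:
client.query(
  'SELECT * FROM programs WHERE authorid = $1',
  [req.user.id],
  function (err, result) {
    if (err) {
      // handle the error (log it, send a 500, etc.)
      return console.error(err);
    }
    // result.rows is an array of matching records
    console.log(result.rows);
  }
);
Parameterized queries also protect against SQL injection, which is another reason to prefer them over building the SQL string by hand.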
