The storage file is not properly formatted (Unexpected end of JSON) - discord.js

I'm working on a giveaway bot and I'm getting this error. Here is the full error:
(node:4) UnhandledPromiseRejectionWarning: SyntaxError: The storage file is not properly formatted (Unexpected end of JSON input).
at GiveawaysManager.getAllGiveaways (/app/node_modules/discord-giveaways/src/Manager.js:308:27)
at async GiveawaysManager._init (/app/node_modules/discord-giveaways/src/Manager.js:391:30)
Here is my code:
const { GiveawaysManager } = require("discord-giveaways");

// `bot` is the discord.js Client created elsewhere in the file
const manager = new GiveawaysManager(bot, {
    storage: "./giveaways.json",
    updateCountdownEvery: 10000,
    default: {
        botsCanWin: false,
        embedColor: "#FF0000",
        reaction: "🎉"
    }
});
bot.giveawaysManager = manager;
I'm new to coding, so it will be great if you explain in baby steps.

I had this same issue, and it turns out that the giveaways.json file only had a [] in it.
It is best if you just don't add a file because the module should add one for you!

The storage file is your ./giveaways.json. As described in the npm documentation, the module saves its data to that file in JSON format, which can easily end up malformed, so make sure that you haven't touched the giveaways.json file, much less changed it. Adding even a single line to giveaways.json by hand can cause the GiveawaysManager to append to it incorrectly and then be unable to read it.
My suggestion is simply to delete the ./giveaways.json file. This should let the module recreate it and get rid of any syntax errors, unless the problem comes from the npm module itself. Note that deleting it will delete, and therefore stop, all giveaways in progress, so make sure you have none running.
If this doesn't fix the issue, then delete the ./giveaways.json file again and create a new ./giveaways.json file with this as its contents:
{}
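If you want to guard against this on startup, here is a minimal sketch (my own addition, not part of the original answer; the ensureStorage name is made up, and the "{}" default simply follows the suggestion above) that checks whether the storage file contains valid JSON and recreates it if it does not:

const fs = require("fs");

// Recreate the storage file if it is missing, empty, or not valid JSON.
function ensureStorage(path, fallback = "{}") {
    try {
        JSON.parse(fs.readFileSync(path, "utf8")); // throws on empty/invalid JSON
    } catch (err) {
        fs.writeFileSync(path, fallback); // reset to known-good contents
    }
}

ensureStorage("./giveaways.json"); // call this before constructing the GiveawaysManager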


How to get information about the file

I am able to open and stream the file without issue using the following; however, I need to be able to use the file information that is stored inside the bucket.
const db = connection.connections[0].db;
const bucket = new mongoose.mongo.GridFSBucket(db, {
    bucketName: bucketName
});
bucket.openDownloadStreamByName(filename).pipe(res);
For example, I would like to be able to set the following:
res.setHeader('Content-Type', (TYPE));
res.setHeader('Content-Length', (LENGTH));
I am wondering whether the call above accepts options for this; I also don't know if piping prevents us from setting the Content-Type and Content-Length headers once the stream has started.
According to the docs, no, you can't get file info from the stream, but judging from the source code it seems you can.
According to this and this, you could get the contentType by accessing
bucket.openDownloadStreamByName(...).s.files[0].contentType
or
bucket.openDownloadStreamByName(...).s.file?.contentType
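Another option, not mentioned in the answer above, is to query the bucket's files collection with bucket.find() before piping, which avoids reaching into the stream's internal .s state. A rough sketch, assuming an Express-style res and the bucket from the question (the sendFile name and the fallback content type are my own assumptions):

// Look the file up in the <bucketName>.files collection first, then stream it.
async function sendFile(filename, res) {
    const [file] = await bucket.find({ filename: filename }).limit(1).toArray();
    if (!file) {
        return res.status(404).end();
    }
    // contentType may be on the file document or inside file.metadata,
    // depending on how the file was uploaded.
    res.setHeader('Content-Type', file.contentType || 'application/octet-stream');
    res.setHeader('Content-Length', file.length);
    bucket.openDownloadStreamByName(filename).pipe(res);
}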

SAD PANDA: TypeError: failed to fetch

=== SAD PANDA ===
TypeError: Failed to fetch
=== SAD PANDA ===
While executing a Flow Cadence transaction in React, I got the above error.
My intention is that when I click the mintToken button, this transaction executes and mints the NFT.
const mintToken = async () => {
    console.log(form.name);
    const encoded = await fcl.send([
        fcl.proposer(fcl.currentUser().authorization),
        fcl.payer(fcl.authz),
        fcl.authorizations([fcl.authz]),
        fcl.limit(50),
        fcl.args([
            fcl.arg(form.name, t.String),
            fcl.arg(form.velocity, t.String),
            fcl.arg(form.angle, t.String),
            fcl.arg(form.rating, t.String),
            fcl.arg(form.uri, t.String)
        ]),
        fcl.transaction`
            import commitContract from 0xf8d6e0586b0a20c7

            transaction {
                let receiverRef: &{commitContract.NFTReceiver}
                let minterRef: &commitContract.NFTMinter

                prepare(acct: AuthAccount) {
                    self.receiverRef = acct.getCapability<&{commitContract.NFTReceiver}>(/public/NFTReceiver)
                        .borrow()
                        ?? panic("Could not borrow receiver reference")
                    self.minterRef = acct.borrow<&commitContract.NFTMinter>(from: /storage/NFTMinter)
                        ?? panic("could not borrow minter reference")
                }

                execute {
                    let metadata: {String: String} = {
                        "name": name,
                        "swing_velocity": velocity,
                        "swing_angle": angle,
                        "rating": rating,
                        "uri": uri
                    }
                    let newNFT <- self.minterRef.mintNFT()
                    self.receiverRef.deposit(token: <-newNFT, metadata: metadata)
                    log("NFT Minted and deposited to Account 2's Collection")
                }
            }
        `
    ]);
    await fcl.decode(encoded);
};
This error being so useless is my fault, but I can explain what is happening here, because it only happens in a really specific situation.
The Sad Panda error is a catch-all error that happens when there is a catastrophic failure while fcl tries to resolve the signatures and it fails in a completely unexpected way. At the time of writing, it usually shows up when people are writing their own authorization functions, so that was the first thing I looked at in your code example. Since you are using fcl.authz and fcl.currentUser().authorization (both of those are the same, by the way), your situation isn't caused by a custom authorization function, which leads me to believe this is either a configuration issue (fcl.authz is having a hard time doing its job correctly) or that what fcl gets back from the wallet doesn't line up with what it expects internally (most likely because of an out-of-date version of fcl).
I have also seen this come up when the version of the sdk that fcl uses doesn't line up with the version of the sdk that is installed (because some people have added @onflow/sdk as well as @onflow/fcl), so I would also check to make sure you only have fcl in your package.json and not the sdk as well (everything you should need from the sdk is exposed by fcl directly, meaning you shouldn't need the sdk as a direct dependency of your application).
I would first recommend making sure you are using the latest version of fcl (your code should still all work), then I would make sure you are only using fcl and not inadvertently pulling in an older version of the sdk. If you are still getting the same error after that, could you create an issue on the GitHub repo so we can dedicate some resources to helping sort this out (and make it so you and others don't see this cryptic error in future versions of fcl)?
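To make that advice concrete, here is a rough sketch, not from the original answer, of the two things I would check: that @onflow/sdk is not a direct dependency alongside @onflow/fcl, and that fcl is configured with an access node and wallet discovery endpoint (the testnet URLs below are standard values, used here as an assumption about your setup):

// package.json -- keep only fcl as a direct dependency
// "dependencies": {
//     "@onflow/fcl": "^1.0.0"      // latest fcl
//     // "@onflow/sdk": "..."      // remove if present; fcl re-exports what you need
// }

// e.g. src/flow/config.js -- example configuration for Flow testnet
import * as fcl from "@onflow/fcl";

fcl.config()
    .put("accessNode.api", "https://rest-testnet.onflow.org")                    // Flow access node
    .put("discovery.wallet", "https://fcl-discovery.onflow.org/testnet/authn");  // wallet discovery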

Getting imported .less files to update with au run --watch in Aurelia

Sorry, this is quite hard to explain but I want to be able to change any less file (including imported ones) and have it update the site via watch.
I have a root level (from src) site.less file.
I also have various .less files scattered around the src folder and am importing them in the site.less file.
When I run au run --watch, the initial build updates the CSS for my application successfully.
When I then update a .less file, it triggers a refresh, but the site is not updated with the changes.
In an attempt to resolve this, I have changed the watch task in aurelia_project/tasks/watch.
I have made it so that when a .less file is changed, it only adds my root .less file to pendingRefreshPaths instead.
Now when I change and save an imported .less file, it triggers a watch refresh that adds the appropriate files (including the root), but it still does not update the site.
If I then open the root .less file and save it with no changes, the same thing happens: no changes are shown.
The strange thing, and the clue to where I think I need to look, is that if I CHANGE the contents of the root .less file and THEN save, it all works as expected.
As such, I think I need to trick something in the pipeline into thinking there is a real change to the root .less file, so that watches triggered by the other .less files actually succeed.
Any ideas where it is ignoring unchanged files despite them being in pendingRefreshPaths?
Found the culprit: it is the changedInPlace function that was filtering it out.
If I remove this, then when I change any .less file it queues up the root .less file instead, and that obviously imports and compiles the other files successfully.
export default function processCSS() {
    return gulp.src(project.cssProcessor.source)
        //.pipe(changedInPlace({firstPass: true}))   <--- remove (or comment out) this pipe
        .pipe(plumber({ errorHandler: notify.onError('Error: <%= error.message %>') }))
        .pipe(sourcemaps.init())
        .pipe(less())
        .pipe(build.bundle());
}
watch.ts
let watch = (callback?) => {
    watchCallback = callback || watchCallback;
    return gulpWatch(
        Object.keys(watches),
        {
            read: false, // performance optimization: do not read actual file contents
            verbose: true
        },
        (vinyl) => {
            if (vinyl.path && vinyl.cwd && vinyl.path.startsWith(vinyl.cwd)) {
                let pathToAdd = vinyl.path.substr(vinyl.cwd.length + 1);
                if (pathToAdd.endsWith(".less")) {
                    log(`Watcher: Adding path src\\site.less to pending build changes...`);
                    // Crude, but could be moved to config to define a root
                    pendingRefreshPaths.push("src\\site.less");
                }
                else {
                    log(`Watcher: Adding path ${pathToAdd} to pending build changes...`);
                    pendingRefreshPaths.push(pathToAdd);
                }
                refresh();
            }
        });
};

PHPExcel Load error - Cell coordinate must be a range of cells

Good Afternoon All,
I am working on an issue in PHPExcel. Using the following code:
try {
    $inputFileType = PHPExcel_IOFactory::identify($fileLocation);
    $objReader = PHPExcel_IOFactory::createReader($inputFileType);
    $objReader->setReadDataOnly(true);
    $objPHPExcel = $objReader->load($fileLocation);
} catch(Exception $e) {
    die('ERROR LOADING FILE: "'.print_r(pathinfo($fileLocation),true).'": '.$e->getMessage());
} # end try catch
This responds with the following error message:
ERROR LOADING FILE: "Array ( [dirname] => upload [basename] => d10f8...188 [filename] => d10f8....188 ) ": Cell coordinate must be a range of cells.
Which makes no sense, since I am not reading the file yet, only loading it. This code has been in place and working without issue for months (probably 100+ uses); only one file is causing this error. The file is an Office 2007 XLSX (just like all the others). I have converted the file to multiple other formats (xls, xlt, xlsm), but none of the copies will load either. I have found nothing of interest in the file that could explain this behavior.
I have not found anything in my logs and am at a loss to understand the error message of 'Cell coordinate must be a range of cells'. I have isolated the code and made sure that this error message is being generated during this try/catch and is not coming from somewhere else.
Any help would be greatly appreciated,
Paul
This error was caused by a print area being defined in one of the sheets. I removed all print areas using these instructions (https://support.office.com/en-us/article/Change-or-clear-a-print-area-on-a-worksheet-deed3c1f-d2ca-4b78-b28d-9c17f0b5de34#bmclearprintarea), reran the upload, and everything worked. Thanks to MarkBaker for his assistance.
Paul

Spontaneous Server Errors During AngularJS $http calls

I'm building an SPA in AngularJS served by a Laravel (5.1) backend. Of late I've been encountering an annoying error, a server 500 or a status code 0 error, which is a bit hard to explain, but let me try and maybe someone will understand the dental formula of my problem.
When I start my AngularJS controller, I make several server calls (via independent $http calls from services) to retrieve information I might later need in the controller. For example,
Functions.getGrades()
    .then(function(response) {
        $scope.grades = response.data;
    });

Subjects.offered()
    .then(function(response) {
        $scope.subjects = response.data;
    });
Later on I pass these variables (grades or subjects) to a service where they are used for processing. However, these calls randomly return 500 server errors, and sometimes status code 0, after they run. This happens in a random way and it is hard for me to pin down the circumstances leading to it. It leaves me with frequent empty Laravel-ised error screens like the ones shown below.
Anyone reading my mind?
OK, after a suggestion given in a comment above that I check my Laravel log files (located in storage/logs/laravel.log in Laravel 5.1), I found out that the main error most of the time was this one: 'PDOException' with message 'SQLSTATE[HY000] [1044] Access denied for user ''@'localhost' to database 'forge'' in ..., plus another one that said something like No valid encrypter found. These were the key.
On reading another SO thread here, it said in part:
I solved it; sometimes Laravel does not read APP_KEY in .env and returns the value "SomeRandomString" (the default defined in config/app.php), and you get the error "key length is invalid". So the solution is to copy the value of APP_KEY to the 'key' value in config/app.php, that's all! Solved!
That was exactly the issue! When loading the DB params from .env into config/database.php, Laravel was sometimes unable to read the environment variables and fell back to the default options (forge for the DB name and username, and SomeRandomString for the APP_KEY). So, to solve this, I just did as advised: copied the APP_KEY from .env into config/app.php and changed the default DB parameters to the actual DB name, username and password I'm using. Just that, and the errors were gone. Hope someone finds this helpful.
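For anyone hitting similar random 500 / status 0 responses, here is a small client-side debugging sketch (my addition, not part of the original answer) that logs the status and body of a failed call, so the Laravel error shows up in the browser console instead of only in storage/logs/laravel.log:

// Debugging aid: attach a .catch to the existing service call from the question.
Functions.getGrades()
    .then(function(response) {
        $scope.grades = response.data;
    })
    .catch(function(error) {
        // status 0 usually means the request never completed (network error, CORS, abort);
        // status 500 means the server failed -- error.data holds Laravel's error response.
        console.error('getGrades failed', error.status, error.data);
    });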
