Cloud Tasks are stuck in queue and are not executed - google-app-engine

I am using Cloud Functions to put tasks into a Cloud Tasks queue and invoke a service (worker) function. Both the task generator and the task handler functions are deployed to Cloud Functions.
This is my createTask.js:
const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient();

exports.createTask = async (req, res) => {
  const location = 'us-central1';
  const project = 'project-id';
  const queue = 'queueid';
  const payload = 'Hello, World!';
  const parent = client.queuePath(project, location, queue);

  const task = {
    appEngineHttpRequest: {
      httpMethod: 'POST',
      relativeUri: '/log_payload',
    },
  };

  if (payload) {
    task.appEngineHttpRequest.body = Buffer.from(payload).toString('base64');
  }

  let inSeconds = 0;
  if (inSeconds) {
    // The time when the task is scheduled to be attempted.
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }

  console.log('Sending task:');
  console.log(task);

  // Send create task request.
  const request = {parent: parent, task: task};
  const [response] = await client.createTask(request);
  console.log(`Created task ${response.name}`);
  res.send({message: 'Ok'});
};
This is my server.js:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.enable('trust proxy');
app.use(bodyParser.raw({type: 'application/octet-stream'}));

app.get('/', (req, res) => {
  // Basic index to verify app is serving
  res.send('Hello, World!').end();
});

app.post('/log_payload', (req, res) => {
  // Log the request payload
  console.log('Received task with payload: %s', req.body);
  res.send(`Printed task payload: ${req.body}`).end();
});

app.get('*', (req, res) => {
  res.send('OK').end();
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
When I trigger the task generator function via an HTTP request in Postman, the task is added to the queue but stays there forever. The handler's logs show it was never triggered; the task in the queue cannot reach its handler. The queue and the task's logs show every dispatch attempt failing, and the failed task remains in the queue.

I have tried to reproduce the issue by following the documentation, and the tasks were executed successfully. I assume you also followed the Cloud Tasks quickstart and the GitHub code samples for set-up. This quickstart sets up the following components:
a) Create Task (~ createTask.js) - This can either be run locally or deployed as a Cloud Function. In your case, it has been created as a Cloud Function.
b) Task Queue Creation - This is the creation of a Cloud Tasks queue.
c) Task Target / Handler (~ server.js) - The quickstart assumes this component is deployed as an App Engine worker instance. This can also be seen in the corresponding task creation script (~ createTask.js).
Based on the description, I am assuming you deployed the Task Target / Handler as a Cloud Function as well. If this assumption is correct, then you need to follow this public doc to create an HTTP Target task, which uses the "httpRequest" construct instead of "appEngineHttpRequest". There is also a tutorial that you may find helpful. A sketch of that change is shown below.
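As a rough sketch, assuming your handler function is reachable at a placeholder trigger URL (substitute your function's real HTTPS trigger URL), the task in createTask.js would change along these lines:

// Sketch of an HTTP target task; the URL below is a placeholder for the
// handler function's actual HTTPS trigger URL.
const task = {
  httpRequest: {
    httpMethod: 'POST',
    url: 'https://us-central1-project-id.cloudfunctions.net/log_payload',
    headers: {'Content-Type': 'text/plain'},
    body: Buffer.from(payload).toString('base64'),
  },
};
const [response] = await client.createTask({parent: parent, task: task});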
The assumption that you are using Cloud Functions instead of App Engine as the target for tasks is also supported by the "404 - Not Found" error in the screenshots you provided. This error signifies that the target App Engine endpoint (~ /log_payload) was not found, which is also why the task is not getting executed.
I suggest you try out the above steps. If those do not help, you may want to raise a support case, as your issue seems to require more in-depth analysis of your project logs to see why the task queue is not being triggered.

Related

What is best practice for for testing fulfilled chainlink oracle requests ethers/hardhat?

I am using Hardhat with ethers on Rinkeby to test a smart contract that makes a GET request to a local Chainlink node. I can observe on the node dashboard that the request is fulfilled.
I am struggling to write a test that waits for the second fulfillment transaction to be confirmed.
I see similar tests in the SmartContractKit/chainlink repo tests:
it("logs the data given to it by the oracle", async () => {
const tx = await oc.connect(roles.oracleNode).fulfillOracleRequest(...convertFufillParams(request, response));
const receipt = await tx.wait();
assert.equal(2, receipt?.logs?.length);
const log = receipt?.logs?.[1];
assert.equal(log?.topics[2], response);
});
I fail to see how this would wait for the fulfillment transaction at all. In the consumer.sol that this function calls, there is a RequestFulfilled event that is emitted, but it doesn't seem like this test is listening for it.
Another example I found, the Ocean Protocol request test, accomplishes this by creating a mapping of request ids, an accessor, and a while loop in the test that polls until the request id is found.
it("create a request and send to Chainlink", async () => {
let tx = await ocean.createRequest(jobId, url, path, times);
request = h.decodeRunRequest(tx.receipt.rawLogs[3]);
console.log("request has been sent. request id :=" + request.id)
let data = 0
let timer = 0
while(data == 0){
data = await ocean.getRequestResult(request.id)
if(data != 0) {
console.log("Request is fulfilled. data := " + data)
}
wait(1000)
timer = timer + 1
console.log("waiting for " + timer + " second")
}
});
This makes sense, and I see how it works. However, I would like to avoid creating a mapping and an accessor, since I imagine there must be a more direct way.
You'd want to look at the hardhat-starter-kit to see examples of working with Chainlink/oracle API responses.
For unit tests, you'd want to just mock the API responses from the Chainlink node.
For integration tests (for example, on a testnet) you'd add some wait for the return. In the sample hardhat-starter-kit, it just waits x number of seconds, but you could also code your tests to listen for events to know when the oracle has responded. This does use events to get the requestId; however, you don't have to define the event yourself, as the Chainlink core code already includes it.
it('Should successfully make an external API request and get a result', async () => {
  const transaction = await apiConsumer.requestVolumeData()
  const tx_receipt = await transaction.wait()
  const requestId = tx_receipt.events[0].topics[1]
  // wait 30 secs for oracle to callback
  await new Promise(resolve => setTimeout(resolve, 30000))
  // Now check the result
  const result = await apiConsumer.volume()
  console.log("API Consumer Volume: ", new web3.utils.BN(result._hex).toString())
  expect(new web3.utils.BN(result._hex)).to.be.a.bignumber.that.is.greaterThan(new web3.utils.BN(0))
})
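If you'd rather not hard-code the 30-second sleep, a minimal sketch of the event-listening variant is below. It assumes the consumer contract inherits Chainlink's ChainlinkClient and therefore emits the standard ChainlinkFulfilled(bytes32 indexed id) event, and that the event appears in the contract's ABI so ethers can build a filter for it:

it('waits for the oracle callback via the ChainlinkFulfilled event', async () => {
  const transaction = await apiConsumer.requestVolumeData()
  const tx_receipt = await transaction.wait()
  const requestId = tx_receipt.events[0].topics[1]
  // Resolve when ChainlinkFulfilled fires for our requestId; reject on a
  // timeout so the test cannot hang forever if the oracle never responds.
  await new Promise((resolve, reject) => {
    const timeout = setTimeout(() => reject(new Error('fulfillment timed out')), 60000)
    apiConsumer.once(apiConsumer.filters.ChainlinkFulfilled(requestId), () => {
      clearTimeout(timeout)
      resolve()
    })
  })
  const result = await apiConsumer.volume()
  expect(new web3.utils.BN(result._hex)).to.be.a.bignumber.that.is.greaterThan(new web3.utils.BN(0))
})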

Can't figure out where to initiate CronJob in react app

I have a React app which must perform a weekly task every Monday at 7:58 am. The task is set up as a separate function notification(), and I want to use the 'cron' package from NPM to call notification() at the appropriate time.
I have the cron job wrapped inside a function like this:
let mondayNotif = () => {
  // Six-field pattern: second minute hour day-of-month month day-of-week (1 = Monday)
  new CronJob('0 58 7 * * 1', function() {
    notification();
  }, null, true, 'America/Los_Angeles');
};
My question: where should I call the function mondayNotif(), to make sure that the CronJob is initiated correctly? I thought at first it must be on the backend, but the NPM package doesn't seem to support server-side. But if I call mondayNotif() on the client side, will the CronJob still happen if the site is inactive?
From what I know, React is front-end: it runs on the client side. You need a server, in this case a Node.js-based server. Theoretically, if nobody opens the website, nothing will be fired up in React. Look up how to schedule cron jobs on Node.js.
I found my own answer. But first, a few insights on CronJobs that would have helped me:
CronJobs are essentially functions with an embedded clock. Once they are "initiated", you don't have to call them again; the cron library invokes them itself, based on the time you scheduled in the parameters (e.g. "30 6 * * 5").
There is some discrepancy between blogs about the cron time format. For instance, some blogs insist there are 6 time fields, but I found it also works with 5; the optional leading field is seconds (see the sketch after this list).
CronJobs should be in a separate file from the body of your main code, typically at the top of your folder structure near your "package.json" & "server.js" files.
It seems to be cleanest to setup all of your CronJob utilities directly inside the cronjob.js file. For instance: I used a separate database connection directly in cronjob.js and by-passed the api routes completely.
CronJobs should be initiated exactly once, at the beginning of the app launch. There are a few ways to do this: package.json or server.js are the most obvious choices.
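For reference, here is a quick sketch of the two pattern lengths the cron package accepts (field order as documented by the package; the schedules themselves are illustrative):

const { CronJob } = require('cron');

// Five fields: minute hour day-of-month month day-of-week
new CronJob('30 6 * * 5', () => console.log('Fridays at 6:30 am'), null, true);

// Six fields: the same schedule with an optional leading seconds field
new CronJob('0 30 6 * * 5', () => console.log('Fridays at 6:30:00 am'), null, true);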
Here is the file structure I ended up using:
-App
--package.json
--server.js
--cronjob.js
--/routes
--/src
--/models
--/public
...And then I imported the cronjob.js into "server.js". This way the cronjob function is initiated one time, when the server.js file is loaded during "dev" or "build".
For reference, here's the raw cronjob.js file (this is for an email notification):
const CronJob = require('cron').CronJob;
const Department = require('./models/department.js');
const template_remind = require('./config/remindEmailTemplate.js');
const SparkPost = require('sparkpost');
const client = new SparkPost('#############################');
const mongoose = require("mongoose");
const MONGODB_URI = process.env.MONGODB_URI || "mongodb://localhost:27017/app";
mongoose.Promise = Promise;

// -------------------------- MongoDB -----------------------------
// Connect to the Mongo DB
mongoose.connect(MONGODB_URI, { useNewUrlParser: true }, (err, db) => {
  if (err) {
    console.log("Unable to connect to the mongoDB server. Error:", err);
  } else {
    console.log("Connection established to", MONGODB_URI);
  }
});
const db = mongoose.connection;

// Show any mongoose errors
db.on("error", error => {
  console.log("Mongoose Error: ", error);
});

// Once logged in to the db through mongoose, log a success message
db.once("open", () => {
  console.log("Mongoose CRON connection successful.");
});

// ------------------------ Business Logic --------------------------
function weekday(notifications) {
  Department.find({"active": true, "reminders": notifications, "week": {$lt: 13}}).distinct('participants', function(err, doc) {
    if (err) {
      // console.log("The error: " + err)
    } else {
      console.log("received from database... " + JSON.stringify(doc))
      for (let i = 0; i < doc.length; i++) {
        client.transmissions.send({
          recipients: [{address: doc[i]}],
          content: {
            from: 'name@sparkmail.email.com',
            subject: 'Your email notification',
            html: template_remind()
          },
          options: {sandbox: false}
        }).then(data => {})
      }
    }
  })
}

function weeklyNotif() {
  new CronJob('45 7 * * 1', function() {weekday(1)}, null, true, 'America/New_York');
  new CronJob('25 15 * * 3', function() {weekday(2)}, null, true, 'America/New_York');
  new CronJob('15 11 * * 5', function() {weekday(3)}, null, true, 'America/New_York');
}

module.exports = weeklyNotif()
As you can see, I setup a unique DB connection and email server connection (separate from my API file), and ran all of the logic inside this one file, and then exported the initiation function.
Here's what appears in server.js:
const cronjob = require("./cronjob.js");
All you have to do here is require the file, and because it is exported as a function, this automatically initiates the cronjob.
Thanks for reading. If you have feedback, please share.
No way, don't call CronJob from the client side: if there are 100 users, the CronJob will be triggered 100 times. You need to have it on the server side for sure.

JWT Authentication in Google Cloud Functions

I'm having trouble troubleshooting the cause of a 403 response from the Google Dataflow API when it is called using the "googleapis" module inside a Google Cloud Function.
The same code works when run on my PC.
The JWT .json file is retrieved from an object stored in a Google Storage bucket.
The code looks like this:
...
return getToken() // Retrieves the JWT client from Google Storage
  .then(function (jwtToken) {
    console.log("Token: ", JSON.stringify(jwtToken));
    return dataFlowList({
      projectId: adc.projectId,
      auth: jwtToken,
      filter: "TERMINATED"
    }).then(list => filterDataflowJobList(list));
...
And here is the getToken function:
...
let storage: CloudStorage.Storage = CloudStorage({
  projectId: adc.projectId
});
var bucket: CloudStorage.Bucket = storage.bucket(bucketName);
var bucketGetFiles = PromiseLab.denodeify(bucket.getFiles);
var stream = bucket.file(jwtJsonFileName).createReadStream();
return toString(stream)
  .then(function (msg) {
    var jsonJwt = JSON.parse(msg);
    var jwtClient = new google.auth.JWT(
      jsonJwt.client_email,
      null,
      jsonJwt.private_key,
      ['https://www.googleapis.com/auth/cloud-platform'], // an array of auth scopes
      null
    );
    return jwtClient;
  }).catch(function (error) {
    console.log("Error while trying to retrieve JWT json");
    throw error;
  });
}
...
I'm based in the EU and Cloud Functions are US-bound; could that be the cause?
The Dataflow jobs also run in the US.
It turned out that, while running on Cloud Functions, the authentication retrieval method I was using did not retrieve the projectId, hence the unauthorized response.
async function getADC() {
  // Acquire a client and the projectId based on the environment. This method looks
  // for the GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS environment variables.
  const res = await auth.getApplicationDefault();
  let client = res.credential;

  // The createScopedRequired method returns true when running on GAE or a local developer
  // machine. In that case, the desired scopes must be passed in manually. When the code is
  // running in GCE or a Managed VM, the scopes are pulled from the GCE metadata server.
  // See https://cloud.google.com/compute/docs/authentication for more information.
  if (client.createScopedRequired && client.createScopedRequired()) {
    // Scopes can be specified either as an array or as a single, space-delimited string.
    const scopes = ['https://www.googleapis.com/auth/cloud-platform'];
    client = client.createScoped(scopes);
  }

  return {
    client: client,
    projectId: res.projectId
  };
}
I discovered it by looking at the request header in the error log: the URL was of the form url: 'https://dataflow.googleapis.com/v1b3/projects//jobs' (notice the double "//" between "projects" and "jobs", where the projectId should be).
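A minimal guard for this, assuming the Cloud Functions runtime exposes the project id through the GCLOUD_PROJECT or GCP_PROJECT environment variables (as the Node.js runtimes of that era did), could look like this:

async function getADC() {
  const res = await auth.getApplicationDefault();
  let client = res.credential;
  if (client.createScopedRequired && client.createScopedRequired()) {
    client = client.createScoped(['https://www.googleapis.com/auth/cloud-platform']);
  }
  // Fall back to the runtime's environment variables when getApplicationDefault()
  // does not resolve a projectId, so the request URL never degenerates into
  // ".../projects//jobs".
  const projectId = res.projectId || process.env.GCLOUD_PROJECT || process.env.GCP_PROJECT;
  if (!projectId) {
    throw new Error('Unable to determine projectId from ADC or the environment');
  }
  return { client: client, projectId: projectId };
}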

Firebase Cloud Functions Not Running

I'm getting some unexpected behavior from Firebase Cloud Functions, where it seems the function below does not run. My expectation is that the data at the /posts endpoint will be logged to the console. I get no errors on deploying the function.
The function is for a backend-only action that the client/user is not involved in, so a trigger based on database events or HTTPS won't work for me without setting up another server to call the endpoint.
Is there any reason why the code below would not log?
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

getScheduledPosts = () => {
  admin.database().ref("/posts")
    .orderByKey()
    .once("value")
    .then((snapshot) => {
      console.log(snapshot);
    })
    .catch(err => { console.log(err); });
  console.log("Posts Ran");
};

// Call this function
getScheduledPosts();
You're not defining a Cloud Function at all here. Because you don't have any Cloud Functions defined, the code you've written will never run. You have to export one from your index.js, and its definition has to be built using the firebase-functions SDK. If you're trying to create a database trigger (definitely read the docs there), it looks something like this:
exports.makeUppercase = functions.database.ref('/posts/{id}')
  .onWrite(event => {
    // do stuff here
  });
Don't try to do "one-off" work that should be run when a function is deployed. That's not how Cloud Functions works. Functions are intended to be run in response to events that occur in your project.
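As an illustration (a sketch only: the query is the asker's, but wiring it to an HTTPS trigger is an assumption about what fits their use case), the original query could be exported as an HTTPS-triggered function so it runs when the endpoint is invoked rather than at deploy time:

// Sketch: expose the query as an HTTPS-triggered function so it only
// runs when the endpoint is invoked, not when the code is deployed.
exports.getScheduledPosts = functions.https.onRequest((req, res) => {
  return admin.database().ref('/posts')
    .orderByKey()
    .once('value')
    .then(snapshot => {
      console.log(snapshot.val());
      res.status(200).send('Posts logged');
    })
    .catch(err => {
      console.log(err);
      res.status(500).send(err.toString());
    });
});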

In GAE Channel API the onmessage is not called

I am building an app for GAE using the Python API. It is a multi-player game, and I use the Channel API to communicate game state between players.
On App Engine, the onmessage handler of the channel is not called. The onopen handler is called; onerror and onclose are not called either. The weird thing is that this works perfectly on the local development server.
Is it possible for something like this to work on the development server but not on App Engine itself?
I'll be really glad if someone can look into the following description of my app and help me figure out what has happened. Thank you.
I looked into similar questions, but I haven't made the mistakes described there.
<script>
sendMessage = function(path, opt_param, opt_param2) {
  path += '?g=' + state.game_key;
  if (opt_param) {
    path += '&' + opt_param;
  }
  if (opt_param2) {
    path += '&' + opt_param2;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('POST', path, true);
  xhr.send();
};
The above function is used to make a POST request to the server.
onOpened = function() {
  sendMessage('/resp');
  console.log('channel opened');
};
Above is the function I want called when the channel is opened for the first time. It sends a POST to the '/resp' address.
onMessage = function(m) {
  console.log('message received');
  message = JSON.parse(m.data);
  // do stuff with message here
};
I want to process the response I get from that request in the above function.
The following are the onerror and onclose handlers:
onError = function() {
  console.log('error occurred');
  channel = new goog.appengine.Channel('{{ token }}');
  socket = channel.open();
};
onClose = function() {
  console.log('channel closed');
};
channel = new goog.appengine.Channel('{{ token }}');
socket = channel.open();
socket.onopen = onOpened;
socket.onmessage = onMessage;
socket.onclose = onClose;
socket.onerror = onError;
</script>
This script is at the top of the body tag. It works fine on my local development server. But on App Engine:
the onOpened function is called,
I can see the request to /resp in the server logs,
but onMessage is never called; the log 'message received' never appears in the console.
This is the server side:
token = channel.create_channel(user.user_id() + game.user1.user_id())
url = users.create_logout_url(self.request.uri)
template_values = {
    'token': token,
    'id': pid,
    'game_key': str(game.user1.user_id()),
    'url': url
}
path = os.path.join(os.path.dirname(__file__), 'game.html')
self.response.out.write(template.render(path, template_values))
And this is the request handler for the '/resp' request. My application is a multi-player card game, and I want to inform the other players that a new player has connected. Even the newly connected player should get this message.
class Responder(webapp2.RequestHandler):
    def post(self):
        user = users.get_current_user()
        game = OmiGame.get_by_key_name(self.request.get('g'))
        if game.user1:
            channel.send_message(game.user1.user_id() + game.user1.user_id(), create_message('%s joined.' % user.nickname()))
        if game.user2:
            channel.send_message(game.user2.user_id() + game.user1.user_id(), create_message('%s joined.' % user.nickname()))
EDIT: user1 is the user who created the game. I want the other players' tokens to be created by concatenating user1's user_id with the relevant user's user_id. Could something be going wrong here?
When I try this on the local dev server, I get these messages perfectly fine, but on GAE onMessage is not called. When the create button is clicked, the page with the above script is loaded, and "<player nickname> connected" should be displayed.
The channel behavior on the dev server and in production is somewhat different. On the dev server, the channel client just polls with frequent HTTP requests. In production, comet-style long polling is used.
I suspect there may be a problem with making the XHR call inside the onOpened handler. In Chrome, at least, I can see that the next talkgadget GET request used by the Channel API is cancelled.
Try calling sendMessage('/resp') outside of the onOpened handler. Perhaps enqueue it using setTimeout so it is called after you return, as in the sketch below.
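A minimal sketch of that deferral, assuming the same sendMessage helper as above:

socket.onopen = function() {
  console.log('channel opened');
  // Defer the XHR so it doesn't interfere with the channel's own
  // long-polling request; a 0 ms timeout runs it after this handler returns.
  setTimeout(function() {
    sendMessage('/resp');
  }, 0);
};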
