Artisan::call('up') won't take effect when called via URL

How would you solve the following problem?
When I go to https://myweb.local/artisan/site-down my site is put into maintenance mode, as expected. My routes look like this:
Route::get('/artisan/site-up', function() {
    $exitCode = Artisan::call('up');
    return redirect()->back();
});

Route::get('/artisan/site-down', function() {
    $exitCode = Artisan::call('down');
    return redirect()->back();
});
If I now want to bring the site back out of maintenance mode via URL, i.e. call https://myweb.local/artisan/site-up, then the first route is not processed, which makes sense. On my local machine I can simply bring the site back up from the command line (php artisan up), but how can I do that on the remote server if I don't have SSH access?
Alternatively, I could delete the "down" file from the "storage" folder via FTP and the site would come back up, but that is not a nice method.
Any suggestions?

Alternatively, I could solve this problem by attaching a secret:
Route::get('/artisan/site-down', function() {
    $exitCode = Artisan::call('down --secret="123456789"');
    return redirect()->back();
});
Then calling https://myweb.local/123456789 sets a maintenance-mode bypass cookie, after which I can call https://myweb.local/artisan/site-up to wake the site from maintenance mode.
I'm still interested in other possible approaches, though.

Related

Request Deferrer with Service Worker in PWA

I am making a PWA where users can answer forms. I want it to work offline as well, so that when a user fills out a form without an internet connection, the reply is uploaded once they are back online. For this, I want to catch the requests and send them when online. I wanted to base it on the following tutorial:
https://serviceworke.rs/request-deferrer_service-worker_doc.html
I have managed to implement the localStorage and ServiceWorker parts, but it seems the POST requests are not caught correctly.
Here is the core function:
function tryOrFallback(fakeResponse) {
  // Return a handler that...
  return function(req, res) {
    // If offline, enqueue and answer with the fake response.
    if (!navigator.onLine) {
      console.log('No network availability, enqueuing');
      return;
      // return enqueue(req).then(function() {
      //   // As the fake response will be reused but Response objects
      //   // are one use only, we need to clone it each time we use it.
      //   return fakeResponse.clone();
      // });
    }
    console.log("LET'S FLUSH");
    // If online, flush the queue and answer from network.
    console.log('Network available! Flushing queue.');
    return flushQueue().then(function() {
      return fetch(req);
    });
  };
}
I use it with:
worker.post("mypath/add", tryOrFallback(new Response(null, {
status: 212,
body: JSON.stringify({
message: "HELLO"
}),
})));
The path is correct, and it detects when the actual POST event happens. However, I can't access the actual request (the req passed into the handler is basically empty), and the response, when displayed, has the custom status but does not contain the message (the body is empty). So I can detect when the POST is happening, but I can't get at the actual message.
How to fix it?
Thank you in advance,
Grzegorz
Regarding your sample code, the way you're constructing your new Response is incorrect; you're supplying null for the response body. If you change it to the following, you're more likely to see what you're expecting:
new Response(JSON.stringify({message: "HELLO"}), {
  status: 212,
});
But, for the use case you describe, I think the best solution would be to use the Background Sync API inside of your service worker. It will automatically take care of retrying your failed POST periodically.
Background Sync is currently only available in Chrome, so if you're concerned about that, or if you would prefer not to write all the code for it by hand, you could use the background sync library provided as part of the Workbox project. It will automatically fall back to explicit retries whenever the real Background Sync API isn't available.
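For illustration, here's a minimal sketch of the Workbox approach (assuming a Workbox v5+ modular build and reusing the mypath/add endpoint from your example; the queue name and retention time are placeholders):
import {registerRoute} from 'workbox-routing';
import {NetworkOnly} from 'workbox-strategies';
import {BackgroundSyncPlugin} from 'workbox-background-sync';

// Failed POSTs to mypath/add are queued in IndexedDB and replayed
// when the browser fires a 'sync' event (or, where Background Sync
// is unavailable, whenever the service worker starts up again).
const queuePlugin = new BackgroundSyncPlugin('form-replies', {
  maxRetentionTime: 24 * 60 // minutes to keep queued requests
});

registerRoute(
  ({url}) => url.pathname.endsWith('/mypath/add'),
  new NetworkOnly({plugins: [queuePlugin]}),
  'POST'
);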

Webdriver.IO not able to download file continuously

I'm using Webdriver.io to download a file continuously
I tried the following code:
var webdriverio = require('webdriverio');
var options = {
  desiredCapabilities: {
    browserName: 'chrome'
    // waitforTimeout: 1000000
  }
};

webdriverio
  .remote(options)
  .init()
  .url('https://xxx')
  .setValue('#username', 'xxx@gmail.com')
  .click('#login-submit')
  .pause(1000)
  .setValue('#password', '12345')
  .click('#login-submit')
  .getTitle().then(function(title) {
    console.log('Title was: ' + title);
  })
  .pause(20000)
  .getUrl().then(function(url) {
    console.log('URL: ' + url);
  })
  .getTitle().then(function(title) {
    console.log('Title was: ' + title);
  })
  .click("a[href='/wiki/admin'] button.iwdh")
  .getUrl().then(function(url) {
    console.log('URL after settings ' + url);
  })
  .pause(3000)
  .scroll('div.jsAtfH', 0, 1000)
  .click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
  .pause(10000)
  .click('//*[@id="backup"]/a')
  //.pause(400000)
  .end();
Note: the file size is 7 GB and how long the download takes depends on the network, so instead of using pause() and a timeout, is there any way to do this with WebdriverIO or Node.js?
To begin with, your current task (waiting for a huge file to download) is not a common use case for Webdriver-based automation frameworks, WebdriverIO included. Such frameworks aren't meant to download massive files.
First off, you're confusing the waitforTimeout value with the WebdriverIO test timeout. Your test is timing out before the .pause() ends.
Currently you're running your tests via the WebdriverIO test runner. If you want to increase the test timeout, you have to use a dedicated test framework (Mocha, Jasmine, or Cucumber) and set its timeout value to whatever you find appropriate. Going forward, I recommend Mocha (coming from an ex-Cucumber guy).
You will have to install Mocha: npm install --save-dev wdio-mocha-framework and run your tests with it. Your test should look like this afterwards:
describe("Your Testsuite", function() {
it("\nYour Testcase\n", function() {
return browser
.url('https://xxx')
.setValue('#username', ‘xxx#gmail.com’)
.click('#login-submit')
// rest of the steps
.scroll('div.jsAtfH',0,1000)
.click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
.pause(10000)
.click('//*[#id="backup"]/a')
)}
)}
Your config (wdio.conf.js) should contain the following:
framework: 'mocha',
mochaOpts: {
  ui: 'bdd',
  timeout: 99999999
}
As a side note, I tried waiting a very long time (> 30 mins) using the above config and had no issues whatsoever.
Let me know if this helps. Cheers!
If you click a download button in your browser and then close the browser, the download is cancelled with it. If you own the website the download button lives on, try to rework it so you have a downloadable URL, then look for a module or another way to download files from an HTTP URL. If you're not the owner and you can't find a URL in the href, you may be able to grab the generated download URL from the Network tab of your browser's inspector.
Also, I've never heard of a browser being closed after a timeout; maybe that comes from webdriver.io. I've never kept Chrome open that long with webdriver.io.
As a workaround, you could use an interval (say, every minute) that issues some webdriver.io command so the session doesn't time out.
I know it's a very old question, but I wanted to answer a question from the comments (and don't yet have the reputation to reply there directly). I'll answer the main question too.
When I give the timeout in the wdio.conf.js file it's not able to download the file; it closes the session. But by giving .pause(2000000) in the webdriver.io code it is able to download a 7 GB file. What is the use of the timeout in the wdio.conf.js file if it kicks out the session without downloading?
That timeout relates to element state during the test run: it "determines how long the instance should wait for that element to reach the state".
https://webdriver.io/docs/timeouts.html can help here. But to answer the main question too:
There are many more timeouts such a test deals with. As iamdanchiv wrote, you should try using one of the supported frameworks, like Mocha or Jasmine.
IMO the easiest way right now is to do a quick fresh setup using the CLI provided by WDIO:
https://webdriver.io/docs/gettingstarted.html
There you can simply pick the additional framework you want to use. I would suggest Jasmine and Chromedriver for this. Then in your wdio.conf.js you can change this part:
waitforTimeout: 10000,
jasmineNodeOpts: {
  // Jasmine default timeout
  defaultTimeoutInterval: 60000,
},
To something that works for you. Or you can use one of the boilerplate projects from the WDIO page, like this one:
https://webdriver.io/docs/boilerplate.html
But that's not all! You will still have to create some method or function that checks for the file. So check where the file gets downloaded (or make it download where you want), and then create a method that uses some kind of wait:
https://webdriver.io/docs/api/browser/waitUntil.html
browser.waitUntil(condition, { timeout, timeoutMsg, interval })
So you can set the timeout either here or in wdio.conf via waitforTimeout. Inside the condition you can use the Node filesystem module (https://nodejs.org/api/fs.html) to check the state of the file; see the sketch after the link below.
This can be helpful for getting the wait-for-file condition right:
https://blog.kevinlamping.com/downloading-files-using-webdriverio/
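For instance, a minimal sketch of such a check (assuming WebdriverIO sync mode; the download directory and file name are hypothetical, so point them at wherever your browser saves downloads):
const fs = require('fs');
const path = require('path');

// Hypothetical names: adjust to your Chrome download settings.
const downloadDir = path.join(__dirname, 'downloads');
const backupFile = path.join(downloadDir, 'backup.zip');

browser.waitUntil(
  () =>
    fs.existsSync(backupFile) &&
    // Chrome streams into a *.crdownload part-file and renames it
    // when the download completes, so wait for that to vanish too.
    !fs.existsSync(backupFile + '.crdownload'),
  {
    timeout: 60 * 60 * 1000, // allow up to an hour for a 7 GB file
    interval: 60 * 1000,     // poll the filesystem once a minute
    timeoutMsg: 'backup file did not finish downloading in time'
  }
);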

AngularJS 1: delete an item, but the item still exists until refresh

Here is my simple Angular 1 app.
Source code here.
Basically it is a copy of this.
I am able to do CRUD operations. The issue is that when I delete a record, it redirects back to the home page and the record I deleted is still there. If I refresh the page, it is gone.
Is there a way to delete a record so that after the redirect I see the latest list?
Update 1:
Unfortunately, it is still unresolved. Strangely, it seems the promise in resolve is cached. I added a few console.log calls inside the code so you can see the code flow; open the Chrome developer tools to see it.
I reviewed your code; the problem is here:
this.deleteContact = function(contactId) {
  var url = backend_server + "/contacts/" + contactId;
  // actually http delete
  return $http.delete(url)
    .then(function(response) {
      return response;
    }, function(response) {
      alert("Error deleting this contact.");
      console.log(response);
    });
}
If you have a service that manages your contacts, use it there to call your server to delete the contact.
The reason you cannot delete without a refresh is that you delete from the DB but not from the Angular array, so you must update the scope's array as well.
Your code is also hard to read; I have some suggestions for you:
browserify, watchify
lodash
and an MVC structure on the backend
You delete it remotely but not locally, so you see the result only after refreshing (that is, only after requesting the updated data from the server). You need to update your local contacts after the delete succeeds on the server side.
$scope.deleteContact = function(contactId) {
  Contacts.deleteContact(contactId).then(function(data) {
    ...
    // DELETE YOUR LOCAL CONTACT HERE
    ...
    $location.path("/");
  });
}
I didn't look deeply into your code, so I can't say exactly how you should do it, but as far as I can see you keep your local contacts in $scope.contacts in your ListController; a sketch follows below.
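To make that concrete, here is a minimal sketch of the delete handler (assuming the list lives in $scope.contacts and each record carries an _id field; both names are guesses from the description, so adjust them to your model):
$scope.deleteContact = function(contactId) {
  Contacts.deleteContact(contactId).then(function() {
    // Drop the deleted record from the local array so the list the
    // user sees matches the server without a full page refresh.
    $scope.contacts = $scope.contacts.filter(function(contact) {
      return contact._id !== contactId;
    });
    $location.path('/');
  });
};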

Angular $resource works with IIS Express but not IIS? I think something is wrong with the PUT

I have isolated the problem down to a few lines. With IIS Express it calls the PUT on the Web API. When I switch to IIS with the same code, the call to the PUT method never happens. The GET call works fine with both. Any idea?
$scope.save = function (msa) {
  $scope.msa = msa;
  var id = this.msa.PlaceId;
  Msa.update({ id: id }, $scope.msa, function () {
    alert('finished update'); // only gets here with IIS Express
    $scope.updatedItems.push(id);
    $location.path('/');
  });
}

MsaApp.factory('Msa', function ($resource) {
  return $resource('/api/Place/:id', { id: '@id' }, { update: { method: 'PUT' } });
});
EDIT 1:
I thought it was working, but now it only works with 'localhost' and not with the computer name; it is not calling the server method. Any ideas what to look out for that would make the site act differently between localhost and the machine name? Even stranger: the Angular site won't load in IE, but it loads in Chrome.
EDIT 2:
I think I have the answer. The default Web API PUT/update scaffolding creates invalid code; it would somewhat randomly break at db.Entry(place).State = EntityState.Modified. I found code here that seems to fix it so far, though I'm not exactly sure what it does:
An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key
Remove the WebDAV module from IIS; it should work.
IIS blocks some HTTP verbs by default; I believe PUT is one of them (DELETE is another).
See: https://stackoverflow.com/a/12443578/1873485
Go to Handler Mappings in your IIS Manager and find ExtensionlessUrlHandler-Integrated-4.0. Double-click it, click the Request Restrictions... button and, on the Verbs tab, add both DELETE and PUT.
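If you'd rather make these changes in configuration than by clicking through the IIS Manager, here is a sketch of the equivalent web.config entries (covering both the WebDAV removal suggested above and the verb restrictions; the names assume the stock ASP.NET 4.0 integrated-pipeline module and handler):
<system.webServer>
  <!-- WebDAV intercepts PUT/DELETE before they reach Web API -->
  <modules>
    <remove name="WebDAVModule" />
  </modules>
  <handlers>
    <remove name="WebDAV" />
    <!-- re-register the extensionless handler with PUT/DELETE allowed -->
    <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
    <add name="ExtensionlessUrlHandler-Integrated-4.0"
         path="*."
         verb="GET,HEAD,POST,DEBUG,PUT,DELETE"
         type="System.Web.Handlers.TransferRequestHandler"
         preCondition="integratedMode,runtimeVersionv4.0" />
  </handlers>
</system.webServer>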

AngularJS HTML5 mode degrade to full page reloads in lieu of hashbang

By enabling HTML5 mode in AngularJS, the $location service rewrites URLs to remove the hashbang from them. This is a great feature that will help me with my application, but there is a problem with its fallback to hashbang mode. My service requires authentication, and I am forced to use an authentication mechanism external to my application. If a user attempts to go to a URL for my app with a hashbang in it, they are first redirected to the authentication page (they won't ever touch my service unless successfully authenticated) and then redirected back to my application. Since the hash fragment is only visible client-side, whatever part of the route comes after it has been dropped by the time the request hits my server. Once authenticated, the user can re-enter the URL and it will work, but it's that one initial time that disrupts the user experience.
My question, then: is there any way to go from $location.html5Mode(true) to a fallback of full page reloads for browsers that lack support, skipping AngularJS's hashbang routing entirely?
The best comparison of available implementations of what I'm aiming for would be something such as browsing around folders on github.com. If the browser supports rewriting the URL without initiating a page refresh, the page will asynchronously load the necessary parts. If the browser does not support it, when a user clicks on a folder, a full-page refresh occurs. Can this be achieved with AngularJS in lieu of using the hashbang mode?
DON'T overwrite the core functionality.
Use Modernizr, do feature detection, and then proceed accordingly.
Check for History API support:
if (Modernizr.history) {
  // history management works!
} else {
  // no history support :(
  // fall back to a scripted solution like History.js
}
Try wrapping your $location and $routeProvider configuration in a check for the browser's HTML5 History API, like this:
if (isBrowserSupportsHistoryAPI()) {
  $location.html5Mode(true);
  $routeProvider.when(...);
}
You may also need to create a wrapper around $location if you use it to change the path.
(Sorry for my terrible English.)
Why not handle the unauthenticated redirect on the client side in this situation? I'd need to know a bit more about exactly how your app works to give you a more specific solution, but essentially something like:
User goes to a route handled by AngularJS, server serves up the AngularJS main template and javascript
User is not authenticated, AngularJS detects this and redirects to the authentication page
You could have something in the module's run block for when the AngularJS application starts:
angular.module('app', [])
  .config(/* ...yadda...yadda...yadda... */)
  .run(['$location', 'authenticationService', function($location, auth) {
    if (!auth.isAuthenticated()) {
      $location.url(authenticationUrl);
    }
  }]);
I've subbed in a service that would determine whether you're authenticated; how is up to you. It could check a session cookie, or hit your API to ask. It really depends on how you want to keep checking authentication as the client application runs.
You can try to override the functionality of the $location service. The general idea would be to rewrite the URL according to whether someone is already authenticated, or just use a single approach (without hashbangs) for all URLs, regardless of whether html5Mode is on.
I'm not sure I fully understand the use case, so I can't write the exact code you need. Here is a sample implementation that overrides/implements and registers the $location service, making sure the hashbang is always eliminated:
app.service('$location', [function() {
  var DEFAULT_PORTS = {
    ftp: 21,
    http: 80,
    https: 443
  };
  angular.extend(this, {
    absUrl: function() {
      return location.href;
    },
    hash: function(hash) {
      return location.hash.substr(1);
    },
    host: function() {
      return location.host;
    },
    path: function(path) {
      if (!path) {
        return location.pathname;
      }
      location.pathname = path;
      return this;
    },
    port: function() {
      return location.port ? Number(location.port) : DEFAULT_PORTS[this.protocol()] || null;
    },
    protocol: function() {
      return location.protocol.substr(0, location.protocol.length - 1);
    },
    replace: function() {
      return this;
    },
    search: function(search, paramValue) {
      if (search || paramValue) {
        return this;
      }
      var query = {};
      location.search.substr(1).split("&").forEach(function(pair) {
        pair = pair.split("=");
        query[pair[0]] = decodeURIComponent(pair[1]);
      });
      return query;
    },
    url: function(url, replace) {
      return this.path();
    }
  });
}]);
