Google Search Console is indexing a few URLs that I never created. These routes never existed, and the following checks have been done:
robots.txt works correctly;
the website has not been hacked;
browsing to these URLs returns the website's 404 error.
I'm using the console's removal tool, but I'm looking for a better solution that requires less manual work.
Thanks for the answers.
By indexed, I presume you mean they are reported under Coverage as excluded?
If they 404 and are excluded, you don't need to use the removal tool. They are effectively removed already.
Google is just reporting that, for some reason, it found the URL, and it is telling you it knows the URL should not be indexed.
Google most likely found the URL through a link to it: maybe a typo in a link, or even a link from years ago. If it looks like a mistake, you could try to fix the link and/or set up a 301 redirect from the bad URL to the right one.
Otherwise, it is fine to ignore reported 404s.
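If you do go the redirect route and your server is Apache (an assumption; the paths below are made up purely for illustration), a single mod_alias rule in .htaccess is usually enough:

    # Hypothetical example: permanently redirect a mistyped URL to the real page
    Redirect 301 /old-typo-page /the-real-page

Search engines will follow the 301 and eventually drop the bad URL from their reports on their own.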
I have a weird issue with my BigQuery UI (at https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I'm fully aware they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself, I can still query them if I type the whole query by hand, but being able to see my schema structure would be helpful.
When I check the Network tab in Chrome's developer tools, I notice that I receive "Failed to load resource: net::ERR_CACHE_MISS". I then decided to do everything I could to reset my own cache: I cleared my cookies, went incognito, tried other browsers, even other computers. NOTHING brings back my datasets.
Has anyone encountered this and has any idea how to force my cache to hit?
I had the same problem a while back. When I got the error, I struggled with it and eventually found a way to reset it. It seems to be something cached server-side that causes this incorrect cache hit. The way to reset the server-side cache is to hit a URL with a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.
Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that has been reported recently and that I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project, and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's suggestion of going to a bogus, uncacheable URL is the recommended workaround to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)
How do I make the /browse page load? I have added the handler as described on this page:
https://wiki.apache.org/solr/VelocityResponseWriter
It is still not working. Can anyone brief me on this? Thanks in advance.
A couple of things to check:
Have you restarted Solr?
Is the core you are trying to 'browse' a default core? If not, you need to include the core name in the URL. E.g. /solr/collection1/browse
Are your library statements in solrconfig.xml pointing at the right Velocity jar? Use an absolute path unless you are very sure what your base directory for relative paths is (a minimal sketch of the relevant entries follows this list).
Are you getting any errors in the server logs?
If all else fails, start comparing what you have with the collection1 example in the Solr distribution. It works there, so you can compare the relevant entries nearly line by line and even experiment with collection1 to make it more like your failing setup.
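For reference, here is a minimal sketch of the solrconfig.xml entries involved, modelled on the collection1 example (the dir paths and template names are assumptions from the default layout, so adjust them to your install):

    <!-- Load the Velocity response writer jars; adjust dir to where your jars live -->
    <lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
    <lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar" />

    <!-- /browse handler rendered through Velocity templates -->
    <requestHandler name="/browse" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="wt">velocity</str>
        <str name="v.template">browse</str>
        <str name="v.layout">layout</str>
        <str name="q">*:*</str>
      </lst>
    </requestHandler>

If your core is not the default one, remember the handler is then reached at /solr/<corename>/browse.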
I'm currently working on a website that I have taken over from another individual. I dumped his SQL file into my database and everything seems to be OK apart from one thing: whenever I try to log in to the back end, or navigate elsewhere, an additional .co.uk is added to the address bar, like so:
From: www.domain.co.uk to www.domain.co.uk.co.uk
I've had a dig in the database but I really can't find anything, and I've never faced this issue before. Could anyone shed some light on this for me, or at least let me know where I could look within the database to identify the problem? Many thanks.
Take a look at the .htaccess file in the root folder, which is hidden and may contain rewrite rules.
Also, I recommend you use this plugin for migrations:
http://wordpress.org/extend/plugins/wp-migrate-db/
I use it whenever I move from localhost to a live site and vice versa. It will also ensure your widgets are preserved, since doing a plain find-and-replace breaks the serialised object syntax WordPress uses.
After migrating, you need to visit Settings > Permalinks so the .htaccess file can be updated according to the new URL for rewrites.
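If you still want to check the database directly, WordPress keeps its own copy of the site URL in the options table, which is the usual culprit for doubled domains after an import. A minimal sketch, assuming the default wp_ table prefix (yours may differ):

    -- Inspect the URLs WordPress thinks it lives at
    SELECT option_name, option_value
    FROM wp_options
    WHERE option_name IN ('siteurl', 'home');

    -- If either value contains the doubled .co.uk.co.uk, correct it
    UPDATE wp_options
    SET option_value = 'http://www.domain.co.uk'
    WHERE option_name IN ('siteurl', 'home');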
I'm in the process of standing up a new CakePHP project with some very simple boilerplate code. While helping a co-worker install the code, I realized that if my debug value is 0, I get a 404 error (just loading the homepage):
Error: The requested address '/' was not found on this server.
If I flip the debug value to 1 or 2, the error goes away and the default homepage (I don't have any custom layout/page created yet) loads happily. This isn't an AJAX request and there's nothing fancy going on here. Has anyone seen this before? I haven't found anything via Google that matches what I'm seeing.
Thanks.
UPDATE
And, just in case anyone is thinking the obvious, my homepage (/) route is configured. Like most everything else, my routes.php file hasn't been modified yet.
Oh, my. Talk about a punitive headslap moment: https://stackoverflow.com/a/3803076/1665
What a long week. I'll mark this answered as soon as the time limit expires. Sheesh...
Try deleting the tmp files located at /app/tmp/cache/.
I found the solution to a similar problem here:
http://cakephp.1045679.n5.nabble.com/Re-Not-found-error-with-DEBUG-0-td1257613.html
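A minimal sketch of clearing those files from the shell, assuming the app lives under app/ as in a default CakePHP layout (adjust the path to your install) and that you want to keep the cache directories themselves:

    # Delete cached files (models, persistent, views) but keep the directories
    find app/tmp/cache -type f -delete

After clearing, reload the page with debug 0; Cake will rebuild the cache on the next request.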
I set up an older Rails 2 project on a brand-new Apache / Debian Squeeze box. The project is basically a single-page site, using links to scroll the page up and down. My links look like this:
http://mydomain.com/en/#home
These links work fine as long as JavaScript intercepts the click event and simply scrolls to the intended section. But when the user leaves the single page and opens one where these (unchanged) links cannot be handled via JavaScript, I only receive:
Forbidden
You don't have permission to access /en/ on this server.
If I change the link to:
http://mydomain.com/en#home
everything works as expected. But I do not want to change my link structure; it already worked well on an older Debian 5 box.
I expect this to be an Apache2 configuration issue, but cannot find anything useful on the net.
Looking forward to any kind of enlightenment.
Thx
Felix
I don't know how or where you are working with JavaScript related to this problem, but let me tell you this.
Everything after the hash # (the fragment) is never passed to the server. That is part of the URL/HTTP standard; it is simply not sent in the request.
It was only intended for navigating to an anchor within the page, and today it is used for a lot of newer techniques including, but not limited to, XSS tricks, JavaScript hooks, etc.
It is possible that links are prevented from loading normally by an onclick handler, with some JavaScript doing something else instead, but it is not possible that you end up on http://mydomain.com/en/#home if http://mydomain.com/en/ does not work.
However, to solve your problem you probably have to adjust your Apache rewrite rule (or enable mod_rewrite in the first place?) to also capture URLs with trailing slashes.
The URLs http://mydomain.com/en/ and http://mydomain.com/en are different and could serve completely different pages.
I would strongly recommend not letting this become a mess: do a strict permanent redirect from one to the other. Which one you choose as primary is up to you.
I prefer a trailing slash and could supply arguments for that, but they can easily be countered by arguments for the opposite. You should find plenty of discussion on that if you search for "trailing slash" here.
To solve your problem, try to find the relevant RewriteRule, copy it, and add it one more time with a trailing slash. See whether it works, and then make a redirect to the URL without the trailing slash.
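As a rough sketch (the en path is from your links; everything else is an assumption about your setup), the redirect could look like this in the vhost or .htaccess:

    # Permanently redirect the trailing-slash form to the canonical URL
    # without the slash; browsers generally carry the #fragment across
    # the redirect, so /en/#home ends up at /en#home.
    RewriteEngine On
    RewriteRule ^/?en/$ /en [R=301,L]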
You may also edit your question and post your server config to get help with that.