How To Clear Storage In studiolab.sagemaker.aws - amazon-sagemaker

I deleted all the files I made, but this problem still comes up:
untitled.ipynb [Errno 28] No space left on device
How can I clear my runtime storage?
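Not an official fix, but a minimal sketch of how you might track down and reclaim space from the Studio Lab terminal. The cache and trash paths below are assumptions about a typical Linux/JupyterLab home directory and may differ on your image:
# See how much of the persistent home volume is used
df -h ~
# Find the largest top-level directories (hidden ones like ~/.cache are included)
du -h --max-depth=1 ~ 2>/dev/null | sort -h | tail -20
# Files deleted in the JupyterLab file browser may only have been moved to a trash folder
rm -rf ~/.local/share/Trash/*
# Package caches are a common culprit
conda clean --all --yes
pip cache purge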

Related

Google App Engine with Flask: Memorystore/redis produces [Errno 104] Connection reset by peer

My Flask-based GAE app has been running for a few weeks without issue. Today I noticed the root URL produces a 500 Internal Server Error most of the time. In the logging I see this appears to be related to session handling in Flask (using Flask-Session). Before transitioning to GAE, this app ran on a VM with local Redis instance for well over a year without any problems.
The Memorystore instance has only about 1500 keys at this time and 3 or 4 mb of data, so it is not heavily loaded. The server itself receives very little traffic (just me and the occasional robot). I am looking for insight as to what has produced this change in behavior or what diagnostic procedures I should pursue since I am new to GAE and the Google Cloud environment.
A typical traceback of the failure looks like this:
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/flask/app.py", line 1969, in finalize_request response = self.process_response(response)
File "/env/lib/python3.7/site-packages/flask/app.py", line 2268, in process_response self.session_interface.save_session(self, ctx.session, response)
File "/env/lib/python3.7/site-packages/flask_session/sessions.py", line 166, in save_session time=total_seconds(app.permanent_session_lifetime))
File "/env/lib/python3.7/site-packages/redis/client.py", line 1540, in setex return self.execute_command('SETEX', name, time, value)
File "/env/lib/python3.7/site-packages/redis/client.py", line 836, in execute_command conn = self.connection or pool.get_connection(command_name, **options)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 1065, in get_connection if connection.can_read():
File "/env/lib/python3.7/site-packages/redis/connection.py", line 682, in can_read return self._parser.can_read(timeout)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 295, in can_read return self._buffer and self._buffer.can_read(timeout)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 205, in can_read raise_on_timeout=False)
File "/env/lib/python3.7/site-packages/redis/connection.py", line 173, in _read_from_socket data = recv(self._sock, socket_read_size)
File "/env/lib/python3.7/site-packages/redis/_compat.py", line 58, in recv return sock.recv(*args, **kwargs) ConnectionResetError: [Errno 104] Connection reset by peer
Again, this is new behavior. The server worked flawlessly for a couple of weeks. What might have changed and where should I look?
Possible related issue: https://github.com/andymccurdy/redis-py/issues/1186
Using health_check_interval eliminated most, but not all, of these "Connection reset by peer" errors for us (GAE Python 2.7):
from os import environ
from redis import Redis

# Periodic health checks let the client detect and replace connections
# that were silently dropped by the server or an idle timeout.
self._redis = Redis(
    environ.get("REDISHOST", "localhost"),
    int(environ.get("REDISPORT", 6379)),
    health_check_interval=30,
)
Perhaps a value lower than 30 would eliminate the remaining occurrences.

Why would file checksums inconsistently fail?

I created a ~2MiB file.
dd if=/dev/urandom of=file.bin bs=2M count=1
Then I copied that file a large number of times and generated a checksum for each (identical) copy.
for i in `seq 50000`; do
    name="file.${i}.bin"
    cp file.bin "${name}"
    sha512sum "${name}" > "${name}.sha512"
done
I then verified all of those checksums with a validation script that runs sha512sum against each file.
for file in `find . -regex ".*\.sha512"`; do
    sha512sum --check --quiet "${file}" || (
        cat "${file}" && sha512sum "${file%.sha512}"
    )
done
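For what it's worth, the same verification can be done in a single pass by handing all of the checksum files to sha512sum at once (a minor variation on the loop above, not part of the original test):
# --check accepts multiple checksum files; --quiet prints only the failures
find . -name "*.sha512" -exec sha512sum --check --quiet {} +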
I just created these files, and when I validate them moments later, I see intermittent failures and inconsistencies in the data (console text truncated for readability):
will:/mnt/usb $ for file in `find ...
file.5602.bin: FAILED
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.5602.bin
b176e8e3ea63a223130f3a0 ... ./file.5602.bin
The checksum files are all identical, since the source files are all identical.
The problem seems to be that my computer is, seemingly at random, generating the wrong checksum for some of my files when I go to validate. A different file fails the checksum every time, and files that previously failed will pass.
will:/mnt/usb $ for file in `find ...
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.3248.bin
442a1d8805ed134c9ab5252 ... ./file.3248.bin
Keep in mind that all of these files are identical.
I see the same behavior with SATA SSDs, HDDs, and USB devices, with md5 and sha512, and with xfs, btrfs, ext4, and vfat. I tried live-booting another OS and saw the same strange behavior regardless. I also see that rsync --checksum thinks the checksums are wrong and re-copies these files even though they have not changed.
What could explain this behavior? Since it's happening on multiple devices with all the scenarios I described, I doubt this is bit rot. My kernel logs show no obvious errors. I would assume this is a hardware issue based on my troubleshooting, but how can this be diagnosed? Is it the CPU, the motherboard, the RAM?
What could explain this behavior? How can this be diagnosed?
From what I've read, a number of issues could explain this behavior. Bad disk(s), bad PSU (power supply), bad RAM, filesystem issues.
I tried the following to determine what was happening. I repeated the experiment with different...
Disks
Types of disks (SSD vs HDD)
External drives (3.5" and 2.5" enclosures)
Flash drives (USB 2 and 3 on various ports)
Filesystems (ext4, vfat (fat32), xfs, btrfs)
Different PSU
Different OS (live boot)
Nothing seemed to resolve this.
Finally, I gave memtest86+ v5.0.1 a try via an Ubuntu live USB.
Voilà: it found bad memory. Through a process of elimination I determined that one of my memory sticks was bad, then tested the other overnight to ensure it was in good shape. I re-ran my experiment and I am now seeing consistent checksums on all my files.
What a subtle bug. I only noticed this bad behavior by accident. If I hadn't been messing around with file checksums, I do not think I would have found this bad RAM.
This makes me want to schedule a routine in which I regularly verify and test my RAM. A consequence of this bad memory stick is that some of my test data did end up corrupt, but more often than not, the checksum verifications were just intermittent failures.
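For anyone wanting to do something similar, here is a rough sketch of a userspace check that can be scheduled. It assumes the memtester package is installed, and it is not a substitute for an offline memtest86+ run, since it can only exercise memory the kernel is willing to hand it:
# One-off run: test 2048 MB of RAM for 3 passes (run as root so memtester can lock the memory)
memtester 2048M 3
# Example root crontab entry to repeat it monthly at 03:00 and keep a log
# (use the absolute path reported by "command -v memtester")
0 3 1 * * /usr/bin/memtester 2048M 3 >> /var/log/memtester.log 2>&1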
In one sample data pool, all the checksums start with cb2848ca0e1ff27202a309408ec76..., because all ~50,000 files are identical.
There are, however, two corrupt files, and this is not bit rot or file-integrity damage.
What seems most likely is that these files were written corrupted because cp hit the bad RAM when I created them. Those two files consistently return bad checksums of 58fe24f0e00229e8399dc6668b9... and bd85b51065ce5ec31ad7ebf3..., while the other 49,998 files all return the same checksum.
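If you end up in a similar situation, a quick way to spot the odd files out is to recompute the checksums once and group them; with identical source files, anything outside the big group is suspect. A sketch, assuming the file.N.bin naming used above (recomputed.txt is just a scratch file):
# Recompute every checksum once and keep the results
sha512sum file.*.bin > recomputed.txt
# Count how many files share each digest; the corrupt outliers show up
# as the groups with a count of 1
awk '{print $1}' recomputed.txt | sort | uniq -c | sort -n | head
# Show which files produced a particular suspect digest
grep '^58fe24f0e00229e8399dc6668b9' recomputed.txt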
This has been a fun, extremely frustrating experiment in debugging.

Google Compute Engine VM instance: VFS: Unable to mount root fs on unknown-block

My instance on Google Compute Engine is not booting up due to some boot order issues.
So, I created another instance and re-configured my machine.
My questions:
How can I handle these issues when I host some websites?
How can I recover my data from the old disk?
logs
[ 0.348577] Key type trusted registered
[ 0.349232] Key type encrypted registered
[ 0.349769] AppArmor: AppArmor sha1 policy hashing enabled
[ 0.350351] ima: No TPM chip found, activating TPM-bypass!
[ 0.351070] evm: HMAC attrs: 0x1
[ 0.351549] Magic number: 11:333:138
[ 0.352077] block ram3: hash matches
[ 0.352550] rtc_cmos 00:00: setting system clock to 2015-12-19 17:06:53 UTC (1450544813)
[ 0.353492] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[ 0.354108] EDD information not available.
[ 0.536267] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
[ 0.537862] md: Waiting for all devices to be available before autodetect
[ 0.538979] md: If you don't use raid, use raid=noautodetect
[ 0.539969] md: Autodetecting RAID arrays.
[ 0.540699] md: Scanned 0 and added 0 devices.
[ 0.541565] md: autorun ...
[ 0.542093] md: ... autorun DONE.
[ 0.542723] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[ 0.543731] Please append a correct "root=" boot option; here are the available partitions:
[ 0.545011] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.546199] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-39-generic #44~14.04.1-Ubuntu
[ 0.547579] Hardware name: Google Google, BIOS Google 01/01/2011
[ 0.548728] ffffea00008ae140 ffff880024ee7db8 ffffffff817af92b 000000000000111e
[ 0.549004] ffffffff81a7c7c8 ffff880024ee7e38 ffffffff817a976b ffff880024ee7dd8
[ 0.549004] ffffffff00000010 ffff880024ee7e48 ffff880024ee7de8 ffff880024ee7e38
[ 0.549004] Call Trace:
[ 0.549004] [] dump_stack+0x45/0x57
[ 0.549004] [] panic+0xc1/0x1f5
[ 0.549004] [] mount_block_root+0x210/0x2a9
[ 0.549004] [] mount_root+0x54/0x58
[ 0.549004] [] prepare_namespace+0x16d/0x1a6
[ 0.549004] [] kernel_init_freeable+0x1f6/0x20b
[ 0.549004] [] ? initcall_blacklist+0xc0/0xc0
[ 0.549004] [] ? rest_init+0x80/0x80
[ 0.549004] [] kernel_init+0xe/0xf0
[ 0.549004] [] ret_from_fork+0x58/0x90
[ 0.549004] [] ? rest_init+0x80/0x80
[ 0.549004] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 0.549004] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
What Causes This?
That is the million-dollar question. After inspecting my GCE VM, I found out there were 14 different kernels installed, taking up several hundred MB of space. Most of the kernels didn't have a corresponding initrd.img file and were therefore not bootable (including 3.19.0-39-generic).
I certainly never went around trying to install random kernels, and once removed, they no longer appear as available upgrades, so I'm not sure what happened. Seriously, what happened?
Edit: New response from Google Cloud Support.
I received another disconcerting response. This may explain the additional, errant kernels.
"On rare occasions, a VM needs to be migrated from one physical host to another. In such case, a kernel upgrade and security patches might be applied by Google."
1. "How can I handle these issues when I host some websites?"
My first instinct is to recommend using AWS instead of GCE. However, GCE is less expensive. Before doing any upgrades, make sure you take a snapshot, and try rebooting the server to see if the upgrades broke anything.
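For reference, a snapshot can be taken from the gcloud CLI before you touch anything; the disk and zone names below are placeholders for your own:
# Snapshot the boot disk before upgrading or rebooting
gcloud compute disks snapshot my-boot-disk \
    --snapshot-names=my-boot-disk-pre-upgrade \
    --zone=us-central1-a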
2. How can I recover my data from old disk?
Even Better - How to recover your instance...
After several back-and-forth emails, I finally received a response from support that allowed me to resolve the issue. Be mindful, you will have to change things to match your unique VM.
Take a snapshot of the disk first in case we need to roll back any of the changes below.
Edit the properties of the broken instance to disable this option: "Delete boot disk when instance is deleted"
Delete the broken instance.
IMPORTANT: ensure not to select the option to delete the boot disk. Otherwise, the disk will get removed permanently!!
Start up a new temporary instance.
Attach the broken disk (this will appear as /dev/sdb1) to the temporary instance.
When the temporary instance is booted up, do the following:
In the temporary instance:
# Run fsck to fix any disk corruption issues
$ sudo fsck.ext4 -a /dev/sdb1
# Mount the disk from the broken vm
$ sudo mkdir /mnt/sdb
$ sudo mount /dev/sdb1 /mnt/sdb/ -t ext4
# Find out the UUID of the broken disk. In this case, the uuid of sdb1 is d9cae47b-328f-482a-a202-d0ba41926661
$ ls -alt /dev/disk/by-uuid/
lrwxrwxrwx. 1 root root 10 Jan 6 07:43 d9cae47b-328f-482a-a202-d0ba41926661 -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Jan 6 05:39 a8cf6ab7-92fb-42c6-b95f-d437f94aaf98 -> ../../sda1
# Update the UUID in grub.cfg (if necessary)
$ sudo vim /mnt/sdb/boot/grub/grub.cfg
Note: This ^^^ is where I deviated from the support instructions.
Instead of modifying all the boot entries to set root=UUID=[uuid character string], I looked for all the entries that set root=/dev/sda1 and deleted them. I also deleted every entry that didn't set an initrd.img file. The top boot entry with correct parameters in my case ended up being 3.19.0-31-generic. But yours may be different.
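A quick way to see which menu entries are actually bootable before editing (not part of the support instructions; the path assumes the broken disk is still mounted at /mnt/sdb as above):
# List each menuentry together with its linux/initrd lines; entries with no
# "initrd" line after them are the ones missing an initrd.img
$ grep -E 'menuentry |^[[:space:]]*(linux|initrd)' /mnt/sdb/boot/grub/grub.cfg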
# Flush all changes to disk
$ sudo sync
# Shut down the temporary instance
$ sudo shutdown -h now
Finally, detach the HDD from the temporary instance, and create a new instance based off of the fixed disk. It will hopefully boot.
Assuming it does boot, you have a lot of work to do. If you have half as many unused kernels as I did, you might want to purge the unused ones (especially since some are likely missing a corresponding initrd.img file).
I used the second answer (the terminal-based one) in this askubuntu question to purge the other kernels.
Note: Make sure you don't purge the kernel you booted in with!
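If you prefer to purge them by hand instead, something along these lines works. This is only a sketch: the kernel version shown is an example, and you should double-check it is not the one reported by uname -r:
# The kernel you are currently running -- never purge this one
uname -r
# List every installed kernel image and its package state
dpkg -l 'linux-image-*' | grep ^ii
# Purge one specific unused kernel (and its -extra package, if present), then refresh the boot menu
sudo apt-get purge linux-image-3.19.0-39-generic linux-image-extra-3.19.0-39-generic
sudo update-grub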
How to handle these issues when I host some websites?
I'm not sure how you got into this situation, but it would be nice to have additional information (see my comment above) to be able to understand what triggered this issue.
How to recover my data from old disk?
Attach and mount the disk
Assuming you did not delete the original disk when you deleted the instance, you can simply mount this disk from another VM to read the data from it. To do this:
attach the disk to another VM instance, e.g.,
gcloud compute instances attach-disk $INSTANCE --disk $DISK
mount the disk:
sudo mkdir -p /mnt/disks/[MNT_DIR]
sudo mount [OPTIONS] /dev/disk/by-id/google-[DISK_NAME] /mnt/disks/[MNT_DIR]
Note: you'll need to substitute appropriate values for:
MNT_DIR: directory
OPTIONS: options appropriate for your disk and filesystem
DISK_NAME: the id of the disk after you attach it to the VM
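For example, with a secondary disk that shows up as google-data-disk and carries an ext4 filesystem (both placeholder values for illustration), the two commands might look like:
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/disk/by-id/google-data-disk /mnt/disks/data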
Unmounting and detaching the disk
When you are done using the disk, reverse the steps:
Note: Before you detach a non-root disk, unmount the disk first. Detaching a mounted disk might result in incomplete I/O operation and data corruption.
unmount the disk
sudo umount /dev/disk/by-id/google-[DISK_NAME]
detach the disk from the VM:
gcloud compute instances detach-disk $INSTANCE --device-name my-new-device
In my case, grub's first menuentry in /boot/grub/grub.cfg (3.19.0-51-generic) was missing an initrd entry and was unable to boot.
Upon further investigation, dpkg shows that specific kernel marked as failed and unconfigured:
dpkg -l | grep 3.19.0-51-generic
iF linux-image-3.19.0-51-generic 3.19.0-51.58~14.04.1
iU linux-image-extra-3.19.0-51-generic 3.19.0-51.58~14.04.1
This all stemmed from the Ubuntu image supplied by Google having unattended-upgrades enabled. For some reason the initrd was killed when it was being built and something else came along and ran update-grub2.
unattended-upgrades-dpkg_2016-03-10_06:49:42.550403.log:update-initramfs: Generating /boot/initrd.img-3.19.0-51-generic
Killed
E: mkinitramfs failure cpio 141 xz -8 --check=crc32 137
unattended-upgrades-dpkg_2016-03-10_06:49:42.550403.log:update-initramfs: failed for /boot/initrd.img-3.19.0-51-generic with 1.
To work around the immediate problem, run:
dpkg --force-confold --configure -a
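If that still leaves you without an initrd for the affected kernel, you can regenerate it and refresh grub by hand (a suggested follow-up, not part of the original workaround; the version string is the one from this particular case):
# Rebuild the missing initrd for the half-configured kernel, then refresh grub
sudo update-initramfs -c -k 3.19.0-51-generic
sudo update-grub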
Although unattended-upgrades is in theory a great idea, having it enabled by default can have unattended consequences.
There are a few cases where the kernel fails to handle an initrd-less boot. Disable the GRUB_FORCE_PARTUUID option so that it boots with an initrd.
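On Ubuntu cloud images this option usually lives in a drop-in file under /etc/default/grub.d/ (the exact file name may differ on yours); roughly:
# Find where the option is set
grep -r GRUB_FORCE_PARTUUID /etc/default/grub /etc/default/grub.d/
# Comment it out (or set it to an empty value) in the file reported above, then rebuild the config
sudo update-grub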

Undocumented Managed VM task queue RPCFailedError

I'm running into a very peculiar and undocumented issue with a GAE Managed VM and Task Queues. I understand that the Managed VM service is in beta, so this question may not be relevant forever, but it's definitely causing me lots of headache now.
The main symptom of the issue is that, in certain (not completely known to me) circumstances, I'm seeing the following error/traceback:
File "/home/vmagent/my_app/some_file.py", line 265, in some_ndb_tasklet
res = yield some_task.add_async('some-task-queue-name')
File "/home/vmagent/python_vm_runtime/google/appengine/ext/ndb/tasklets.py", line 472, in _on_rpc_completion
result = rpc.get_result()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/vmagent/python_vm_runtime/google/appengine/api/taskqueue/taskqueue.py", line 1948, in ResultHook
rpc.check_success()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmstub.py", line 312, in _WaitImpl
raise self._ErrorException(*_DEFAULT_EXCEPTION)
RPCFailedError: The remote RPC to the application server failed for call taskqueue.BulkAdd().
I've gone through my local App Engine SDK to trace this through, and I can get up to the last line of the trace, but google/appengine/ext/vmruntime/ doesn't exist on my machine at all, so I have no idea what's happening in vmstub.py. From looking at the local code, some_task.add_async('the-queue') is spinning up an RPC and waiting for it to finish, but this error is not what the except apiproxy_errors.ApplicationError, e: at line 1949 of taskqueue.py is expecting...
The code that's generating the error looks something like this:
@ndb.tasklet
def kickoff_tasks(batch_of_payloads):
    for task_payload in batch_of_payloads:
        # task_payload is a dict
        task = taskqueue.Task(
            url='/the/handler/url',
            params=task_payload)
        res = yield task.add_async('some-valid-task-queue-name')
Other things worth noting:
this code itself is running in a task handler kicked off by another task.
I first saw this error before implementing this sort of batching, and assumed the issue was because I had added too many tasks from within a task handler.
In some cases, I can run this successfully with a batch size of 100, but in others, it fails consistently (depending on the data in the payloads) at 100, and sometimes succeeds at batch sizes of 50.
The task payloads themselves include batches of items, and are tuned to be just small enough to fit in a task. App Engine advertises a maximum task size of 100KB, so I'm keeping the payloads to under 90,000 bytes right now. Lowering the size even more doesn't seem to help any.
I've also tried implementing an exponential backoff to retry the kickoff_tasks method when this error appears, but it seems that once the error is raised, I can't add any other tasks at all from within the same handler (i.e. I can't kickoff a "continue from where you left off" task, I just have to let this one fail and restart itself)
So, my question is, what is actually causing this error? How can I avoid it, or fix this so that I'm handling it correctly?
This is a known issue that is being worked on. There are actually two issues - the RPC failure itself and the lack of handling of the RPCFailedError exception by the SDK.
There is some public discussion of the issue here.
If you're using App Engine Flexible and the python-compat-multicore image, a new bug popped up related to App Engine using a newer version of the requests library that broke the communication between App Engine Flexible and the datastore. You can fix this error by monkey patching the library in your appengine_config.py file.
Add the following code to appengine_config.py:
try:
    import appengine.ext.vmruntime.vmstub as vmstub
except ImportError:
    pass
else:
    if isinstance(vmstub.DEFAULT_TIMEOUT, (int, long)):
        # Newer requests libraries do not accept integers as header values.
        # Be sure to convert the header value before sending.
        # See Support Case ID 11235929.
        vmstub.DEFAULT_TIMEOUT = bytes(vmstub.DEFAULT_TIMEOUT)
Note that if you do not have an appengine_config.py file, you can just create it in your base project directory (wherever you put your app.yaml file). This file gets run during App Engine startup.

OpenSearchServer: Why am I getting this error Error (java.lang.NullPointerException)

I am using OpenSearchServer v1.2.4 rc3.
For the first few days it worked fine, but when its index size reached 1.0 GB I started getting the error
"Error (java.lang.NullPointerException)"
when I start my crawler. The crawler works fine for some time and then stops with the same error.
What's wrong?
Depending on the size of your index, a memory parameter must be added. By default, OpenSearchServer is set up to run on a small server with the default RAM value provided by the Java Virtual Machine (from 64 MB to 512 MB only).
For large indexes, you must set a higher value. On a Unix/Linux server, just create an /etc/opensearchserver file with the following content:
CATALINA_OPTS="-Xms2G -Xmx2G -server"
export CATALINA_OPTS
On a Windows server, edit the start.bat file and add the following line just after the :okExec label:
set CATALINA_OPTS="-Xms2G -Xmx2G -server"
Replace 2G (which means 2 GB) with the amount of memory you want to allocate to OpenSearchServer.
With a 32-bit version, the memory is limited to 2.5 GB. You can use more memory with a 64-bit operating system, using the following lines (on Unix/Linux):
CATALINA_OPTS="-Xms12G -Xmx12G -d64 -server"
For 64-bit Windows:
set CATALINA_OPTS="-Xms12G -Xmx12G -d64 -server"
After restarting OpenSearchServer, just check in the Runtime tab panel that the correct amount of memory is available.
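On Unix/Linux you can also confirm from a shell that the new heap settings actually reached the running JVM, independently of the web UI (a quick check, assuming OpenSearchServer runs under Tomcat/Catalina as above):
# Show the -Xms/-Xmx flags the running OpenSearchServer JVM was started with
ps -ef | grep -i catalina | grep -oE -- '-Xm[sx][0-9]+[GgMm]'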
Regarding the error details, it is more useful to have the full stack trace. You can find it in the log file (data/logs/oss.log), or in the Runtime/Logs tab panel.
