Can anyone tell me why my Google Play services process keeps crashing? - adb

Here is the logcat output for the PID that crashes every time. How can I fix this?
C:\Users\Acer\Desktop\platform-tools\hasan.txt (30 hits)
Line 6730: 10-04 22:02:24.896 2074 2090 I ActivityManager: Start proc 12340:com.google.android.gms.persistent/u0a33 for content provider com.google.android.gms/.fonts.provider.FontsProvider caller=com.android.vending
Line 6734: 10-04 22:02:24.941 12340 12340 E .gms.persisten: Not starting debugger since process cannot load the jdwp agent.
Line 6752: 10-04 22:02:25.062 12340 12340 I .gms.persisten: The ClassLoaderContext is a special shared library.
Line 6753: 10-04 22:02:25.064 12340 12340 I chatty : uid=10033(com.google.android.gms) identical 1 line
Line 6754: 10-04 22:02:25.067 12340 12340 I .gms.persisten: The ClassLoaderContext is a special shared library.
Line 6755: 10-04 22:02:25.082 12340 12340 W .gms.persisten: JIT profile information will not be recorded: profile file does not exits.
Line 6756: 10-04 22:02:25.082 12340 12340 W .gms.persisten: JIT profile information will not be recorded: profile file does not exits.
Line 6757: 10-04 22:02:25.092 12340 12340 I Perf : Connecting to perf service.
Line 6776: 10-04 22:02:25.208 12340 12340 I Safeboot: Checking safeboot...
Line 6782: 10-04 22:02:25.219 12340 12340 I FixerFramework: Installing ProviderInstaller.
Line 6785: 10-04 22:02:25.227 12340 12340 F libc : Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7d79857000 in tid 12340 (.gms.persistent), pid 12340 (.gms.persistent)
Line 6802: 10-04 22:02:25.470 1116 1116 I /system/bin/tombstoned: received crash request for pid 12340
Line 6803: 10-04 22:02:25.471 12382 12382 I crash_dump64: performing dump of process 12340 (target tid = 12340)
Line 6812: 10-04 22:02:25.492 12382 12382 F DEBUG : pid: 12340, tid: 12340, name: .gms.persistent >>> com.google.android.gms.persistent <<<
Line 6885: 10-04 22:02:27.025 1000 1000 I Zygote : Process 12340 exited due to signal (11)
Line 6886: 10-04 22:02:27.032 2074 2085 I ActivityManager: Process com.google.android.gms.persistent (pid 12340) has died: fore BFGS
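When digging through a dump like this, it helps to look at only the crashing process's lines. A sketch using the saved log file and PID named in this question (hasan.txt and 12340 come from the text above; the adb command in the comment assumes Android 7+, where logcat gained a --pid filter):

```shell
log=hasan.txt   # the saved logcat dump from the question
pid=12340       # the PID that keeps crashing

# Lines where both the PID and TID columns match the crashing process:
grep -E " $pid +$pid " "$log" || echo "no lines found for pid $pid"

# On Android 7+ a fresh capture can be filtered at the source:
#   adb logcat --pid="$(adb shell pidof com.google.android.gms.persistent)"
```

Since the process dies with SIGSEGV, the tombstone written by tombstoned (visible at line 6802 above) is where the full native backtrace ends up.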

Related

Why are open file descriptors not being reused, and instead increasing in number?

I have a simple C HTTP server. I close the file descriptors for disk files and for the connection fds returned by accept(...), but I noticed that I keep getting file descriptor numbers bigger than the previous ones: for example, the descriptor returned by accept starts at 4, then 5, then 4 again, and so on until it reaches the system's maximum open file descriptor limit.
I have set that limit to 10,000 on my system, but I am not sure why the file descriptor numbers climb toward the maximum, and I am fairly sure my program is closing its file descriptors.
So if there are not thousands of simultaneous connections, how come new file descriptor numbers keep increasing over time? After around 24 hours I get the message accept: too many open files. What does this message mean?
Also, does the ulimit -n value get reset automatically without a system reboot?
as mentioned in the answer. The output of ls -la /proc/$$/fd is
dr-x------ 2 fawad fawad 0 Oct 11 11:15 .
dr-xr-xr-x 9 fawad fawad 0 Oct 11 11:15 ..
lrwx------ 1 fawad fawad 64 Oct 11 11:15 0 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 1 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 2 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:25 255 -> /dev/pts/3
and the output of ps aux | grep lh is
root 49855 0.5 5.4 4930756 322328 ? Sl Oct09 15:58 /usr/share/atom/atom --executed-from=/home/fawad/Desktop/C++-work/lhparse --pid=49844 --no-sandbox
root 80901 0.0 0.0 25360 5952 pts/4 S+ 09:32 0:00 sudo ./lh
root 80902 0.0 0.0 1100852 2812 pts/4 S+ 09:32 0:00 ./lh
fawad 83419 0.0 0.0 19976 916 pts/3 S+ 11:27 0:00 grep --color=auto lh
I would like to know what the pts/4 etc. column is. Is it the file descriptor number?
It's likely that the socket represented by the file descriptor is in the CLOSE_WAIT or TIME_WAIT state, which means the TCP stack holds onto it for a while longer, so you won't be able to reuse that descriptor immediately.
Once the socket is fully finished with and closed, the file descriptor number then becomes available for reuse inside your program.
See https://en.m.wikipedia.org/wiki/Transmission_Control_Protocol, section "Protocol operation", specifically the wait states.
To see what files are still open you can run (note that $$ is the current shell's PID; substitute the server's PID to inspect the server itself):
ls -la /proc/$$/fd
The output of this will also be of help.
ss -tan | head -5
LISTEN 0 511 *:80 *:*
SYN-RECV 0 0 192.0.2.145:80 203.0.113.5:35449
SYN-RECV 0 0 192.0.2.145:80 203.0.113.27:53599
ESTAB 0 0 192.0.2.145:80 203.0.113.27:33605
TIME-WAIT 0 0 192.0.2.145:80 203.0.113.47:50685
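To watch for a leak over time, simply counting a process's open descriptors is often enough. A sketch, using the current shell's PID $$ as a stand-in for the server's PID:

```shell
# Number of open file descriptors of a process; a count that only ever
# grows while connection load stays flat points to a descriptor leak.
pid=$$                      # stand-in: use the HTTP server's PID instead
ls /proc/"$pid"/fd | wc -l
```

Running this every few minutes and comparing the counts against the connection load quickly shows whether descriptors are accumulating.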

Using Orbbec Embedded S camera from ARM with OpenNI

I have an ARM SoC that I've connected an Embedded S camera to. I can see the camera is connected:
$ lsusb
Bus 001 Device 006: ID 2bc5:050b
Bus 001 Device 007: ID 2bc5:060b
I downloaded OpenNI_2.3.0.63.zip from https://orbbec3d.com/develop/ then copied the OpenNI-Linux-Arm64-2.3.0.63 directory to my device and ran install.sh. Now when I plug in the camera I get:
[ 5887.390778] hub 1-1:1.0: 2 ports detected
[ 5887.879656] usb 1-1.1: New USB device found, idVendor=2bc5, idProduct=050b
[ 5887.886538] usb 1-1.1: New USB device strings: Mfr=2, Product=1, SerialNumber=3
[ 5887.894193] usb 1-1.1: Product: USB 2.0 Camera
[ 5887.898757] usb 1-1.1: Manufacturer: Sonix Technology Co., Ltd.
[ 5887.904814] usb 1-1.1: SerialNumber: SN0001
[ 5888.232284] usb 1-1.2: New USB device found, idVendor=2bc5, idProduct=060b
[ 5888.239161] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 5888.246856] usb 1-1.2: Product: ORBBEC Depth Sensor
[ 5888.251853] usb 1-1.2: Manufacturer: Orbbec(R)
I cross-compiled a simple app:
#include <iostream>
#include <OpenNI.h>

using namespace std;
using namespace openni;

int main(int argc, char** argv)
{
    const char* deviceURI = openni::ANY_DEVICE;
    Status result = STATUS_OK;
    result = OpenNI::initialize();
    cout << "OpenNI::initialize() = " << result << endl;
    openni::Array<openni::DeviceInfo> deviceList;
    openni::OpenNI::enumerateDevices(&deviceList);
    cout << "OpenNI::enumerateDevices() = " << deviceList.getSize() << endl;
    for (int i = 0; i < deviceList.getSize(); ++i)
    {
        cout << "Device " << deviceList[i].getUri() << " already connected" << endl;
    }
    OpenNI::shutdown();
    return 0;
}
When I ran it first I got:
error while loading shared libraries: libOpenNI2.so: cannot open shared object file: No such file or directory
So I copied libOpenNI2.so to /usr/lib. Now when I run it I get:
OpenNI::initialize() = 1
OpenNI::enumerateDevices() = 0
Why isn't the camera being seen? Is there something else I have to do to get it to work?
I turned on logging using:
OpenNI::setLogMinSeverity(0);
OpenNI::setLogConsoleOutput(true);
and saw:
3774 INFO Log XnLog.cpp 349 New log started on 2019-11-25 09:57:11
3864 INFO Log XnLog.cpp 322 --- Filter Info --- Minimum Severity: VERBOSE
4044 VERBOSE OniContext OniContext.cpp 165 OpenNI 2.3.0 (Build 63)-Linux-Arm (May 13 2019 17:45:57)
4089 VERBOSE OniContext OniContext.cpp 259 Using '/usr/lib/OpenNI2/Drivers' as driver path
4112 VERBOSE OniContext OniContext.cpp 267 Looking for drivers at '/usr/lib/OpenNI2/Drivers'
4167 ERROR OniContext OniContext.cpp 279 Found no drivers matching '/usr/lib/OpenNI2/Drivers/lib*.so'
So I copied the files from OpenNI-Linux-Arm64-2.3.0.63/Redist/OpenNI2/Drivers/ to /usr/lib/OpenNI2/Drivers/. The Readme also says:
*for using with Astra Embedded S/Stereo S, please change the resolution in 'orbbec.ini' to 'Resolution=17' for Depth and IR streams
So I edited this in /usr/lib/OpenNI2/Drivers/orbbec.ini. Now I get:
3924 INFO Log XnLog.cpp 349 New log started on 2019-11-25 10:23:55
4010 INFO Log XnLog.cpp 322 --- Filter Info --- Minimum Severity: VERBOSE
4185 VERBOSE OniContext OniContext.cpp 165 OpenNI 2.3.0 (Build 63)-Linux-Arm (May 13 2019 17:45:57)
4230 VERBOSE OniContext OniContext.cpp 259 Using '/usr/lib/OpenNI2/Drivers' as driver path
4254 VERBOSE OniContext OniContext.cpp 267 Looking for drivers at '/usr/lib/OpenNI2/Drivers'
4547 VERBOSE OniContext OniContext.cpp 309 Loading device driver 'libOniFile.so'...
4588 WARNING xnOS XnLinuxSharedLibs.cpp 107 loading lib from: /usr/lib/OpenNI2/Drivers/libOniFile.so
6199 VERBOSE OniContext OniContext.cpp 309 Loading device driver 'libPSLink.so'...
6240 WARNING xnOS XnLinuxSharedLibs.cpp 107 loading lib from: /usr/lib/OpenNI2/Drivers/libPSLink.so
11412 WARNING DriverHandler OniDriverHandler.cpp 85 LibraryHandler: Couldn't find function oniDriverStreamConvertC2DCoordinates in libPSLink.so. Stopping
11539 WARNING OniContext OniContext.cpp 313 Couldn't use file 'libPSLink.so' as a device driver
11626 VERBOSE OniContext OniContext.cpp 309 Loading device driver 'liborbbec.so'...
11675 WARNING xnOS XnLinuxSharedLibs.cpp 107 loading lib from: /usr/lib/OpenNI2/Drivers/liborbbec.so
15571 INFO Log XnLog.cpp 349 New log started on 2019-11-25 10:23:55
15615 INFO Log XnLog.cpp 322 --- Filter Info --- Minimum Severity: VERBOSE
15645 VERBOSE xnUSB XnLinuxUSB.cpp 383 Initializing USB...
19162 INFO xnUSB XnLinuxUSB.cpp 412 USB is initialized.
OpenNI::initialize() = 0
OpenNI::enumerateDevices() = 0
which is better but still not successful. Then I realised that I hadn't reconnected the camera after copying the driver files; once I reconnected it, it worked.
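Summarising the fixes above as a quick sanity check (a sketch; the paths are the ones from the logs, and OPENNI_ROOT is just a hypothetical override for installs under a different prefix):

```shell
# Verify the three things this debugging session ended up needing:
# the runtime library, the driver .so files, and the edited orbbec.ini.
root=${OPENNI_ROOT:-/usr/lib}   # OPENNI_ROOT is only this sketch's knob

ls "$root/libOpenNI2.so" >/dev/null 2>&1 \
    || echo "libOpenNI2.so missing from the loader path"
ls "$root"/OpenNI2/Drivers/lib*.so >/dev/null 2>&1 \
    || echo "no OpenNI2 drivers installed"
grep -q 'Resolution=17' "$root"/OpenNI2/Drivers/orbbec.ini 2>/dev/null \
    || echo "orbbec.ini missing or Resolution not set per the Readme"
# And remember to re-plug the camera after changing any of these.
```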

Read non-named (anonymous) pipe in terminal

Hello.
I have a very simple C program. I create a pipe in the program (a standard, unnamed one). Can I read the pipe of an existing process from a terminal (stream it with > or cat)? I tried, but my command does nothing. I know that I can create a named pipe, which makes external I/O very easy.
I have the pipe's fd number from /proc/<number>/fd.
Why do I need this? Mostly for debugging (but not only that; I know gdb can look at a pipe). When I fork a process, the child inherits the pts (terminal) and standard in/out. Changing the pts is possible, but it is a bad way. So I would open another terminal and stream the existing process's pipe into it.
Is this possible (cleanly; hacky, convoluted ways don't interest me), or must I use a named pipe?
Can I read pipe of existing process in terminal (stream with > or cat?)
Yes, you can. Example rnpit.c:
#include <string.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    pipe(pipefd);
    write(pipefd[1], "pipe", strlen("pipe"));
    sleep(99); /* give us time to read the pipe */
    return 0;
}
>rnpit&
[1] 1077
>ll /proc/${!}/fd
total 0
lrwx------ 1 armali ARNGO_res4 64 Apr 4 09:22 0 -> /dev/pts/5
lrwx------ 1 armali ARNGO_res4 64 Apr 4 09:22 1 -> /dev/pts/5
lrwx------ 1 armali ARNGO_res4 64 Apr 4 09:22 2 -> /dev/pts/5
lr-x------ 1 armali ARNGO_res4 64 Apr 4 09:22 3 -> pipe:[399466140]
l-wx------ 1 armali ARNGO_res4 64 Apr 4 09:22 4 -> pipe:[399466140]
>cat /proc/${!}/fd/3
pipe
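Reading /proc/<pid>/fd/3 works because opening that path gives a fresh handle on the same pipe. If that ever feels too fragile, the named-pipe alternative mentioned in the question is straightforward, since a FIFO is addressable by path from any terminal. A sketch, using a temporary path:

```shell
# A FIFO lives in the filesystem, so any process can open it by name.
fifo=$(mktemp -u)           # pick an unused path (file must not pre-exist)
mkfifo "$fifo"
echo pipe > "$fifo" &       # writer blocks until a reader opens the FIFO
cat "$fifo"                 # prints: pipe
rm "$fifo"
```

In the debugging scenario above, the program would open the FIFO for writing and you would cat it from the second terminal.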

Bash at job not running - Array

I have built a bash script that runs fine when executed from the command line but does not work when run as a batch job (with at). At first I thought it was the environment, but while debugging I came to suspect a problem with the arrays I need to create. When run from the command line, the log is created and its content is what I expected, but when run with at no log is created at all. Any idea what is causing this issue?
A short script with the piece of code I suppose is not running is below:
#!/bin/bash
fsol=`date +%Y%m%d`
for dia in 0 1 2
do
var=$(date -d "$fsol +$dia days" +'%Y-%m-%d')
orto=`awk -v j=$var 'BEGIN { FS=","} $2 == j { print $3}' hora-sol.dat`
h_orto=${orto:0:2}
m_orto=${orto:2:2}
a_orto+=($h_orto $m_orto)
echo "dia $dia" $var $h_orto $m_orto >> log1.txt
done
echo "${a_orto[@]}" >> log2.txt
Data in hora-sol.dat
32,2016-02-01,0711,1216,1722,10.1885659530428
33,2016-02-02,0710,1216,1723,10.2235441870822
34,2016-02-03,0709,1216,1724,10.2589836910036
35,2016-02-04,0708,1216,1725,10.2948670333624
36,2016-02-05,0707,1216,1727,10.3311771153741
37,2016-02-06,0706,1217,1728,10.3678971831004
38,2016-02-07,0705,1217,1729,10.4050108377139
39,2016-02-08,0704,1217,1730,10.4425020444393
40,2016-02-09,0703,1217,1731,10.4803551390436
41,2016-02-10,0701,1217,1733,10.5185548339287
42,2016-02-11,0700,1217,1734,10.5570862213108
43,2016-02-12,0659,1217,1735,10.5959347763989
44,2016-02-13,0658,1217,1736,10.6350863580571
45,2016-02-14,0657,1217,1737,10.6745272092687
46,2016-02-15,0655,1217,1738,10.7142439549499
47,2016-02-16,0654,1217,1740,10.7542236006922
48,2016-02-17,0653,1217,1741,10.7944535282585
49,2016-02-18,0652,1216,1742,10.8349214920733
50,2016-02-19,0650,1216,1743,10.8756156133281
51,2016-02-20,0649,1216,1744,10.9165243743526
52,2016-02-21,0648,1216,1745,10.9576366115941
53,2016-02-22,0646,1216,1746,10.9989415078031
54,2016-02-23,0645,1216,1747,11.0404285846154
55,2016-02-24,0644,1216,1749,11.0820876932144
56,2016-02-25,0642,1216,1750,11.123909005324
57,2016-02-26,0641,1215,1751,11.1658830035395
58,2016-02-27,0639,1215,1752,11.2080004711946
59,2016-02-28,0638,1215,1753,11.2502524821626
60,2016-02-29,0636,1215,1754,11.2926303895977
Running manually, it generated:
# cat log.txt
dia 0 2016-02-12 0659 1217 1735
dia 1 2016-02-13 0658 1217 1736
dia 2 2016-02-14 0657 1217 1737
06
59
06
58
06
57
Scheduling with at:
# echo "/tmp/horasol/script.sh" | at now +1 minute
warning: commands will be executed using /bin/sh
job 1 at Fri Feb 12 12:11:00 2016
It generated exactly the same:
# cat log.txt
dia 0 2016-02-12 0659 1217 1735
dia 1 2016-02-13 0658 1217 1736
dia 2 2016-02-14 0657 1217 1737
06
59
06
58
06
57
Note the warning informing us that at uses /bin/sh:
warning: commands will be executed using /bin/sh
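Given that warning, one thing worth ruling out: arrays are a bash feature that a strictly POSIX /bin/sh (dash on many distros) does not support, so a script interpreted by such a shell would die at the first a_orto+=(...). A quick check (sketch):

```shell
# bash accepts array syntax:
bash -c 'a=(); a+=(06 59); echo "${a[@]}"'          # prints: 06 59
# whether plain sh does depends on what /bin/sh is on the system;
# dash rejects it outright:
sh -c 'a=(); a+=(06 59); echo "${a[@]}"' 2>/dev/null || echo "sh lacks arrays"
```

Note, though, that because the at job runs /tmp/horasol/script.sh as a command, the script's own #!/bin/bash shebang still applies; /bin/sh only interprets the one-line command that launches it, which is consistent with the job working in the reproduction below.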
Tell us how you conclude that it "does not work when run as a batch job (with at)".
Tell us more about your "when debugging" findings.
Perhaps I'm reproducing this with a different process than yours, and it works for me because of that difference.

Automatically attaching to process on SEGV and other fatal signals (panic_action)

Background
Code to support a 'panic_action' was recently added to the FreeRADIUS v3.0.x, v2.0.x and master branches.
When radiusd (the main FreeRADIUS process) receives a fatal signal (SIGFPE, SIGABRT, SIGSEGV, etc.), the signal handler executes a predefined 'panic_action', which is a snippet of shell code passed to system(). The signal handler performs basic substitution for %e and %p, writing in the current binary name and the current PID.
This should in theory allow a debugger like gdb or lldb to attach to the process (panic_action = lldb -f %e -p %p), either to perform interactive debugging or to automate collection of a backtrace. This actually works well on my system, OS X 10.9.2 with lldb, but only for SIGABRT.
Problem
This doesn't seem to work for other signals like SIGSEGV. The mini backtrace from execinfo is valid, but when lldb or gdb attaches to the process, it only gets the backtrace for the signal handler.
There doesn't seem to be a way in lldb to switch to an arbitrary frame address.
Does anyone know if there's any way of forcing the signal handler to execute on the same stack as the thread that received the signal? Or why the backtraces don't show the full stack when lldb attaches?
The actual output looks like:
FATAL SIGNAL: Segmentation fault: 11
Backtrace of last 12 frames:
0 libfreeradius-radius.dylib 0x000000010cf1f00f fr_fault + 127
1 libsystem_platform.dylib 0x00007fff8b03e5aa _sigtramp + 26
2 radiusd 0x000000010ce7617f do_compile_modsingle + 3103
3 libfreeradius-server.dylib 0x000000010cef3780 fr_condition_walk + 48
4 radiusd 0x000000010ce7710f modcall_pass2 + 191
5 radiusd 0x000000010ce7713f modcall_pass2 + 239
6 radiusd 0x000000010ce7078d virtual_servers_load + 685
7 radiusd 0x000000010ce71df1 setup_modules + 1633
8 radiusd 0x000000010ce6daae read_mainconfig + 2526
9 radiusd 0x000000010ce78fe6 main + 1798
10 libdyld.dylib 0x00007fff8580a5fd start + 1
11 ??? 0x0000000000000002 0x0 + 2
Calling: lldb -f /usr/local/freeradius/sbin/radiusd -p 1397
Current executable set to '/usr/local/freeradius/sbin/radiusd' (x86_64).
Attaching to process with:
process attach -p 1397
Process 1397 stopped
(lldb) bt
error: libfreeradius-radius.dylib debug map object file '/Users/arr2036/Documents/Repositories/freeradius-server-fork/build/objs//Users/arr2036/Documents/Repositories/freeradius-server-master/src/lib/debug.o' has changed (actual time is 0x530f3d21, debug map time is 0x530f37a5) since this executable was linked, file will be ignored
* thread #1: tid = 0x8d824, 0x00007fff867fee38 libsystem_kernel.dylib`wait4 + 8, queue = 'com.apple.main-thread, stop reason = signal SIGSTOP
frame #0: 0x00007fff867fee38 libsystem_kernel.dylib`wait4 + 8
frame #1: 0x00007fff82869090 libsystem_c.dylib`system + 425
frame #2: 0x000000010cf1f2e1 libfreeradius-radius.dylib`fr_fault + 849
frame #3: 0x00007fff8b03e5aa libsystem_platform.dylib`_sigtramp + 26
(lldb)
Code
The relevant code for fr_fault() is here: https://github.com/FreeRADIUS/freeradius-server/blob/b7ec8c37c7204accbce4be4de5013397ab662ea3/src/lib/debug.c#L227
and fr_set_signal(), the function used to set up the signal handlers, is here: https://github.com/FreeRADIUS/freeradius-server/blob/0cf0e88704228e8eac2948086e2ba2f4d17a5171/src/lib/misc.c#L61
As the links contain commit hashes, the code they point to should be static.
EDIT
Finally, with version lldb-330.0.48 on OS X 10.10.4, lldb can now go past _sigtramp.
frame #2: 0x000000010b96c5f7 libfreeradius-radius.dylib`fr_fault(sig=11) + 983 at debug.c:735
732 FR_FAULT_LOG("Temporarily setting PR_DUMPABLE to 1");
733 }
734
-> 735 code = system(cmd);
736
737 /*
738 * We only want to error out here, if dumpable was originally disabled
(lldb)
frame #3: 0x00007fff8df77f1a libsystem_platform.dylib`_sigtramp + 26
libsystem_platform.dylib`_sigtramp:
0x7fff8df77f1a <+26>: decl -0x16f33a50(%rip)
0x7fff8df77f20 <+32>: movq %rbx, %rdi
0x7fff8df77f23 <+35>: movl $0x1e, %esi
0x7fff8df77f28 <+40>: callq 0x7fff8df794d8 ; symbol stub for: __sigreturn
(lldb)
frame #4: 0x000000010bccb027 rlm_json.dylib`_json_map_proc_get_value(ctx=0x00007ffefa62dbe0, out=0x00007fff543534b8, request=0x00007ffefa62da30, map=0x00007ffefa62aaf0, uctx=0x00007fff54353688) + 391 at rlm_json.c:191
188 }
189 vp = map->op;
190
-> 191 if (value_data_steal(vp, &vp->data, vp->da->type, value) < 0) {
192 REDEBUG("Copying data to attribute failed: %s", fr_strerror());
193 talloc_free(vp);
194 goto error;
This is a bug in lldb related to backtracing through _sigtramp, the asynchronous signal handler trampoline in user processes. Unfortunately I can't suggest a workaround for this problem. It has been fixed in the top-of-tree sources for lldb at http://lldb.llvm.org/ if you're willing to build from source (see the "Source" and "Build" sidebars), but Xcode 5.0 and the next dot release are going to have real problems backtracing past _sigtramp.
