SAMA5D31: U-Boot cannot be started - u-boot

I am currently using a custom board based on the SAMA5D31.
eMMC is currently used for boot.
U-Boot fails to start: the boot is stuck after "SD/MMC: Done to load image" with no further reaction.

I have modified the following:
diff --git a/board/sama5d3xek/sama5d3xek.c b/board/sama5d3xek/sama5d3xek.c
index 57093b58..153749ce 100644
--- a/board/sama5d3xek/sama5d3xek.c
+++ b/board/sama5d3xek/sama5d3xek.c
@@ -75,12 +75,12 @@ static void ddramc_reg_config(struct ddramc_register *ddramc_config)
| AT91C_DDRC2_MD_DDR2_SDRAM);
ddramc_config->cr = (AT91C_DDRC2_NC_DDR10_SDR9
- | AT91C_DDRC2_NR_14
+ | AT91C_DDRC2_NR_13
| AT91C_DDRC2_CAS_3
| AT91C_DDRC2_DISABLE_RESET_DLL
| AT91C_DDRC2_ENABLE_DLL
| AT91C_DDRC2_ENRDM_ENABLE /* Phase error correction is enabled */
- | AT91C_DDRC2_NB_BANKS_8
+ | AT91C_DDRC2_NB_BANKS_4
| AT91C_DDRC2_NDQS_DISABLED /* NDQS disabled (check on schematics) */
| AT91C_DDRC2_DECOD_INTERLEAVED /* Interleaved decoding */
| AT91C_DDRC2_UNAL_SUPPORTED); /* Unaligned access is supported */
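These CR fields determine the addressable DDR2 size, so it helps to sanity-check the geometry against the actual part on the board. A rough back-of-the-envelope check (the 32-bit bus width and the 10 column bits implied by AT91C_DDRC2_NC_DDR10_SDR9 are assumptions; adjust them to your DDR2 device):

#include <stdio.h>

/* Rough DDR2 size check from the DDRSDRC CR fields:
 * total bytes = 2^rows * 2^cols * banks * bus_width_bytes */
int main(void)
{
    unsigned rows      = 13;  /* AT91C_DDRC2_NR_13 */
    unsigned cols      = 10;  /* AT91C_DDRC2_NC_DDR10_SDR9 (DDR2 mode) */
    unsigned banks     = 4;   /* AT91C_DDRC2_NB_BANKS_4 */
    unsigned bus_bytes = 4;   /* assumed 32-bit external DDR bus */

    unsigned long long bytes =
        (1ULL << rows) * (1ULL << cols) * banks * bus_bytes;
    printf("%llu MiB\n", bytes / (1024 * 1024));  /* 128 MiB with these values */
    return 0;
}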


Can I connect the ESP32 Modbus UART output pins directly to a TTL-USB converter?

I am creating a slave device for a Modbus network. This is my first encounter with this protocol, so I am not really sure about a few things.
So, this is the recommended schematic for a proper Modbus RTU connection using RS-485:
+---------+          +----x------+      +-----x-----+
|       RX|<---------|RO         |      |         RO|--> RXD
|  ESP32  |          |          B|------|B          |
|       TX|--------->|DI  MAX485 | \  / | MAX485  DI|<-- TXD
|         |          |           |RS-485|           |    MODBUS MASTER
+---------+  RTS -+->|DE         | /  \ |         DE|--+
                  |  |          A|------|A          |  |
                  +--|/RE        |      |        /RE|--+- RTS
                     +----x------+      +-----x-----+
I currently don't have any RS-485 converters on hand, so I am trying to test my Modbus implementation using a setup like this:
+---------+       +---------+       +---------+
|       RX|<------|TX       |       |         |
|  ESP32  |       | TTL-USB |<=====>|   PC    |
|         |       |         |  USB  |         |
|       TX|------>|RX       |       |         |
+---------+       +---------+       +---------+
Does this setup have any chance of working? Those two RS-485 converters should have no impact, or am I missing something? How important is RTS in this type of serial transmission?
If that setup is OK, then I have no idea why I can't communicate with my ESP32 slave device. This is the code I am currently running (unnecessary parts removed for simplicity).
Defines:
#define MB_PORT_NUM UART_NUM_1
#define MB_SLAVE_ADDR (2)
#define MB_DEV_SPEED (9600)
#define UART_TXD_GPIO_NUM 19
#define UART_RXD_GPIO_NUM 18
Content of the task responsible for communicating with the Modbus master. Almost identical to: https://github.com/espressif/esp-idf/tree/release/v4.4/examples/protocols/modbus/serial/mb_slave
mb_param_info_t reg_info; // keeps the Modbus registers access information
void *mbc_slave_handler = NULL;
ESP_ERROR_CHECK(mbc_slave_init(MB_PORT_SERIAL_SLAVE, &mbc_slave_handler)); // Initialization of Modbus controller
mb_communication_info_t comm_info;
comm_info.mode = MB_MODE_RTU;
comm_info.slave_addr = MB_SLAVE_ADDR;
comm_info.port = MB_PORT_NUM;
comm_info.baudrate = MB_DEV_SPEED;
comm_info.parity = MB_PARITY_NONE;
ESP_ERROR_CHECK(mbc_slave_setup((void *)&comm_info));
mb_register_area_descriptor_t reg_area; // Modbus register area descriptor structure
reg_area.type = MB_PARAM_INPUT;
reg_area.start_offset = 0;
/* there is a struct defined somewhere else */
reg_area.address = (void *)&input_reg_params.temp_r1;
reg_area.size = sizeof(uint16_t);
ESP_ERROR_CHECK(mbc_slave_set_descriptor(reg_area));
ESP_ERROR_CHECK(mbc_slave_start());
// RTS and CTS pins are unconnected
ESP_ERROR_CHECK(uart_set_pin(MB_PORT_NUM, UART_TXD_GPIO_NUM, UART_RXD_GPIO_NUM, -1, -1));
// Changed UART_MODE from RS485_DUPLEX, to UART_MODE_UART
ESP_ERROR_CHECK(uart_set_mode(MB_PORT_NUM, UART_MODE_UART));
while (true) {
    mb_event_group_t event = mbc_slave_check_event((mb_event_group_t)MB_READ_WRITE_MASK);
    /* I never get past this point. Stuck at check_event */
}
In order to test it I am using mbpoll program on Linux (https://github.com/epsilonrt/mbpoll).
Command (meaning of parameters: slave address=2, read input, offset=0, baudrate=9600, no parity):
mbpoll -a 2 -t 3 -r 0 -0 -b 9600 -P none /dev/ttyUSB0
When I run it I get a 'Connection timed out' error, and I don't see any debug info on my ESP32 about an incoming transmission. /dev/ttyUSB0 is the correct device; when I 'cat' this file I can see something happening on the UART.
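To rule out wiring and driver problems first, I can also check raw byte reception outside the Modbus stack. A minimal sketch of such a test (the function name and buffer sizes are my own; the pins reuse the defines above):

#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "driver/uart.h"
#include "esp_log.h"

#define UART_TXD_GPIO_NUM 19
#define UART_RXD_GPIO_NUM 18

/* Standalone UART receive test: run this instead of the Modbus stack,
 * then send bytes from the PC and check that they show up in the log. */
void uart_rx_test(void)
{
    const uart_config_t cfg = {
        .baud_rate = 9600,
        .data_bits = UART_DATA_8_BITS,
        .parity    = UART_PARITY_DISABLE,
        .stop_bits = UART_STOP_BITS_1,
        .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
    };
    ESP_ERROR_CHECK(uart_driver_install(UART_NUM_1, 256, 0, 0, NULL, 0));
    ESP_ERROR_CHECK(uart_param_config(UART_NUM_1, &cfg));
    ESP_ERROR_CHECK(uart_set_pin(UART_NUM_1, UART_TXD_GPIO_NUM, UART_RXD_GPIO_NUM,
                                 UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE));

    uint8_t buf[64];
    while (true) {
        int n = uart_read_bytes(UART_NUM_1, buf, sizeof(buf), pdMS_TO_TICKS(1000));
        if (n > 0) {
            ESP_LOG_BUFFER_HEX("uart-test", buf, n);  /* dump whatever arrived */
        }
    }
}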

Is there any gcc compiler warning which could have caught this memory bug?

I haven't programmed C for quite some time and my pointer-fu had degraded. I made a very elementary mistake and it took me well over an hour this morning to find what I'd done. The bug is minimally reproduced here: https://godbolt.org/z/3MdzarP67 (I am aware the program is absurd memory-management wise, just showing what happens).
The first call to realloc() breaks because, of course, the pointer it's given points to stack memory; valgrind made this quite obvious.
I have a rule with myself that any time I track down a bug, if there is a warning that could have caught it, I enable it on my projects. Oftentimes this is not the case, since many bugs come from logic errors the compiler can't be expected to check.
However, here I am a bit surprised. We malloc() and then immediately reassign that pointer, which leaves the allocated memory inaccessible. It's obvious the returned pointer does not live outside the scope of that if block and is never free()'d. Maybe it's too much to expect the compiler to analyze the calls and realize we're attempting to realloc() stack memory, but I am surprised that I can't find anything to yell at me about the leaking of the malloc() returned pointer. Even clang's static analyzer scan-build doesn't pick up on it; I've tried various relevant options.
The best I could find was -fsanitize=address which at least prints out some cluing information during the crash instead of:
mremap_chunk(): invalid pointer
on Godbolt, or
realloc(): invalid old size
Aborted (core dumped)
on my machine, both of which are somewhat cryptic (although yes they do show plainly that there is some memory issue occurring). Still, this compiles without issues.
Since Godbolt links don't live forever, here is the critical section of the code:
void add_foo_to_bar(struct Bar** b, Foo* f) {
    if ((*b)->foos == NULL) {
        (*b)->foos = (Foo*)malloc(sizeof(Foo));
        // uncomment for correction
        //(*b)->foos[(*b)->n_foos] = *f;
        // obvious bug here, we leak memory by losing the returned pointer from malloc
        // and assign the pointer to a stack address (&f1)
        // comment below line for correction
        (*b)->foos = f; // completely wrong
        (*b)->n_foos++;
    } else {
        (*b)->foos = (Foo*)realloc((*b)->foos, ((*b)->n_foos + 1) * sizeof(Foo));
        (*b)->foos[(*b)->n_foos] = *f;
        (*b)->n_foos++;
    }
}
The error occurs because f is a pointer to stack memory (intentionally), and we obviously can't hand a pointer to that stack object to realloc() as if it had been malloc()'d.
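For completeness, here is the same function with the corrections the comments describe applied (copy the pointed-to value into the freshly allocated slot instead of overwriting the heap pointer):

void add_foo_to_bar(struct Bar** b, Foo* f) {
    if ((*b)->foos == NULL) {
        (*b)->foos = (Foo*)malloc(sizeof(Foo));
        (*b)->foos[(*b)->n_foos] = *f;  /* copy the value, keep the malloc()'d pointer */
        (*b)->n_foos++;
    } else {
        (*b)->foos = (Foo*)realloc((*b)->foos, ((*b)->n_foos + 1) * sizeof(Foo));
        (*b)->foos[(*b)->n_foos] = *f;
        (*b)->n_foos++;
    }
}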
Try -fanalyzer if your compiler is recent enough. When running it I get:
../main.c:30:28: warning: ‘realloc’ of ‘&f1’ which points to memory not on the heap [CWE-590] [-Wanalyzer-free-of-non-heap]
30 | (*b)->foos = (Foo*)realloc((*b)->foos, ((*b)->n_foos + 1) * sizeof(Foo));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
‘main’: events 1-2
|
| 37 | int main() {
| | ^~~~
| | |
| | (1) entry to ‘main’
|......
| 45 | add_foo_to_bar(&b, &f1);
| | ~~~~~~~~~~~~~~~~~~~~~~~
| | |
| | (2) calling ‘add_foo_to_bar’ from ‘main’
|
+--> ‘add_foo_to_bar’: events 3-5
|
| 19 | void add_foo_to_bar(struct Bar** b, Foo* f) {
| | ^~~~~~~~~~~~~~
| | |
| | (3) entry to ‘add_foo_to_bar’
| 20 | if ((*b)->foos == NULL) {
| | ~
| | |
| | (4) following ‘true’ branch...
| 21 | (*b)->foos = (Foo*)malloc(sizeof(Foo));
| | ~~~~
| | |
| | (5) ...to here
|
<------+
|
‘main’: events 6-7
|
| 45 | add_foo_to_bar(&b, &f1);
| | ^~~~~~~~~~~~~~~~~~~~~~~
| | |
| | (6) returning to ‘main’ from ‘add_foo_to_bar’
| 46 | add_foo_to_bar(&b, &f2);
| | ~~~~~~~~~~~~~~~~~~~~~~~
| | |
| | (7) calling ‘add_foo_to_bar’ from ‘main’
|
+--> ‘add_foo_to_bar’: events 8-11
|
| 19 | void add_foo_to_bar(struct Bar** b, Foo* f) {
| | ^~~~~~~~~~~~~~
| | |
| | (8) entry to ‘add_foo_to_bar’
| 20 | if ((*b)->foos == NULL) {
| | ~
| | |
| | (9) following ‘false’ branch...
|......
| 30 | (*b)->foos = (Foo*)realloc((*b)->foos, ((*b)->n_foos + 1) * sizeof(Foo));
| | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | | |
| | | (10) ...to here
| | (11) call to ‘realloc’ here
|
No, but runtime testing can save you.
If you can spare the execution overhead, I have seen many applications add an extra layer to memory allocation to track the allocations made and find leaks/errors.
Usually they replace malloc() and free() with macros that include __FILE__ and __LINE__.
One example can be seen here (check the Heap.c and Heap.h files)
https://github.com/eclipse/paho.mqtt.c/tree/master/src
Googling "memory heap debugger" will probably turn up other examples. Or you could roll your own.

darknet doesn't use P5000 GPU with CUDA

I run this command:
./darknet detector train data/obj.data cfg/yolov3_training.cfg back/last_4_4_7pm.weights /back -dont_show -gpus 0
but the GPU is not being used and stays at 0%.
Here are the changes I make to the Makefile:
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
Here is the output:
CUDA-version: 11020 (11000)
Warning: CUDA-version is higher than Driver-version!
, cuDNN: 8.1.0, GPU count: 1
OpenCV version: 3.4.11
0
yolov3_training
0 : compute_capability = 610, cudnn_half = 0, GPU: Quadro P5000
net.optimized_memory = 0
mini_batch = 4, batch = 64, time_steps = 1, train = 1
layer filters size/strd(dil) input output
0 Create CUDA-stream - 0
Create cudnn-handle 0
Here is my nvidia-smi output:
root@n5qr6jidhm:/notebooks/Untitled Folder/darknet# nvidia-smi
Fri Jun 25 17:53:45 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P5000 On | 00000000:00:05.0 Off | Off |
| 39% 62C P0 126W / 180W | 5725MiB / 16278MiB | 92% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
First, start by upgrading (or downgrading) your drivers to match your GPU model; the right version can easily be found on Google. Then check the driver version against darknet's minimum requirements before installing. Don't use the -gpus flag, as it changes nothing here.

Container STATE is Error when the Test Result has Failure

I have set up a Kubernetes environment locally using minikube. I create a Job which has 3 containers:
Hub
Chrome
App (Selenium-TestNG)
When I apply/create the job, it sets up Hub/Chrome/App and executes the Selenium tests.
If all the tests complete successfully, the status of the containers looks like below.
The result from the webAutomation1 app
The above is as expected; the container listing looks good.
Now, if the app (test execution) completes with some failures like below,
then my container listing shows it as Error.
I use ITestListener to write logs to the console for now. What is making the container STATE turn to Error? Is there anything I'm not seeing in terms of integration between the container and the app?
It would be greatly appreciated if someone could help me with this.
As per TestNG exit codes:
When TestNG completes execution, it exits with a return code. This
return code can be inspected to get an idea on the nature of failures
(if there were any). The following table summarises the different exit
codes that TestNG currently uses.
/**
 * |---------------------|---------|--------|-------------|------------------------------------------|
 * | FailedWithinSuccess | Skipped | Failed | Status Code | Remarks                                  |
 * |---------------------|---------|--------|-------------|------------------------------------------|
 * |          0          |    0    |   0    |      0      | Passed tests                             |
 * |          0          |    0    |   1    |      1      | Failed tests                             |
 * |          0          |    1    |   0    |      2      | Skipped tests                            |
 * |          0          |    1    |   1    |      3      | Skipped/Failed tests                     |
 * |          1          |    0    |   0    |      4      | FailedWithinSuccess tests                |
 * |          1          |    0    |   1    |      5      | FailedWithinSuccess/Failed tests         |
 * |          1          |    1    |   0    |      6      | FailedWithinSuccess/Skipped tests        |
 * |          1          |    1    |   1    |      7      | FailedWithinSuccess/Skipped/Failed tests |
 * |---------------------|---------|--------|-------------|------------------------------------------|
 */
Your container is probably using TestNG as the main process, and any run that is not "Passed tests" (i.e., an exit code different from 0) will result in a pod in the terminated/Error state.
You can confirm this by Determining the Reason for Pod Failure.
E.g., you can check your pod state; the output will be something like this:
$ kubectl get pod my-pod-name -o=json | jq '.status.containerStatuses[].state'
{
  "terminated": {
    "containerID": "docker://9bc2497ec0d2bc3b1b62483c217aaaaa1027102a5f7ff1688f47b94254",
    "exitCode": 1,
    "finishedAt": "2019-10-28T02:00:10Z",
    "reason": "Error",
    "startedAt": "2019-10-28T02:00:05Z"
  }
}
and check if the exit code matches your TestNG status code.

Address space for shared libraries loaded multiple times in the same process

First off, I've already found a few references which might answer my question. While I plan on reading them soon (i.e. after work), I'm still asking here in case the answer is trivial and does not require too much supplementary knowledge.
Here is the situation: I am writing a shared library (let's call it libA.so) which needs to maintain a coherent internal (as in, static variables declared in the .c file) state within the same process.
This library will be used by program P (i.e. P is compiled with -lA). If I understand everything so far, the address space for P will look something like this:
 _______________
|  Program P    |
|  <            |
|   variables,  |
|   functions   |
|   from P      |
|  >            |
|               |
|  <            |
|   libA:       |
|   variables,  |
|   functions   |
|   loaded (ie  |
|   *copied*)   |
|   from shared |
|   object      |
|  >            |
|  <            |
|   stuff from  |
|   other       |
|   libraries   |
|  >            |
|_______________|
Now P will sometimes call dlopen("libQ.so", ...). libQ.so also uses libA.so (i.e. was compiled with -lA). Since everything happens within the same process, I need libA to somehow hold the same state whether the calls come from P or Q.
What I do not know is how this will translate in memory. Will it look like this:
 _______________
|  Program P    |
|  <            |
|   P stuff     |
|  >            |
|               |
|  <            |
|   libA stuff, |
|   loaded by P |
|  >            |    => A's code and variables are duplicated
|               |
|  <            |
|   libQ stuff  |
|   <           |
|    libA stuff,|
|    loaded by Q|
|   >           |
|  >            |
|_______________|
... or like this?
 _______________
|  Program P    |
|  <            |
|   P stuff     |
|   *libA       |
|   *libQ       |
|  >            |
|               |
|  <            |
|   libA stuff, |
|   loaded by P |
|  >            |    => A's code is loaded once, Q holds some sort of pointer to it
|               |
|  <            |
|   libQ stuff  |
|   *libA       |
|  >            |
|_______________|
In the second case, keeping a consistent state for a single process is trivial; in the first case, it will require some more effort (e.g. some shared memory segment, using the process id as the second argument to ftok()).
Of course, since I have limited knowledge of how linking and loading work, the diagrams above may be completely wrong. For all I know, the shared libraries could be at a fixed place in memory, and every process accesses the same data and code. The behaviour could also depend on how A and/or P and/or Q were compiled. And this behaviour is probably not platform independent.
The code segment of a shared library exists in memory in a single instance per system. Yet, it can be mapped to different virtual addresses for different processes, so different processes see the same function at different addresses (that's why the code that goes to a shared library must be compiled as PIC).
The data segment of a shared library is created in one copy for each process, and initialized with whatever initial values were specified in the library.
This means that the callers of a library do not need to know if it is shared or not: all callers in one process see the same copy of the functions and the same copy of external variables defined in the library.
Different processes execute the same code, but have their individual copies of data, one copy per process.
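A quick way to see this in practice is to put a static counter in libA and bump it both from P directly and from a dlopen()'d libQ: within one process, both paths hit the same copy of the data. A minimal sketch (the file names and helper functions are hypothetical):

/* liba.c -- build: gcc -shared -fPIC -o liba.so liba.c */
static int counter = 0;                 /* one copy per process */
int a_increment(void) { return ++counter; }

/* libq.c -- build: gcc -shared -fPIC -o libq.so libq.c -L. -la */
int a_increment(void);
int q_touch_a(void) { return a_increment(); }

/* p.c -- build: gcc -o p p.c -L. -la -ldl ; run with LD_LIBRARY_PATH=. */
#include <dlfcn.h>
#include <stdio.h>
int a_increment(void);

int main(void) {
    printf("%d\n", a_increment());      /* prints 1: P's direct call into libA */

    void *q = dlopen("./libq.so", RTLD_NOW);  /* libQ reuses the already-loaded libA */
    int (*q_touch_a)(void) = (int (*)(void))dlsym(q, "q_touch_a");
    printf("%d\n", q_touch_a());        /* prints 2: same counter, same libA copy */
    return 0;
}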
