I am trying to build a PC configurator using Protégé, and I want to compare the socket type of the CPU with that of the motherboard.
I have a CPU class with a socket type data property, and so does the Motherboard class. At the moment, however, I am failing to compare the two: the assembled PC should have a CPU and a motherboard with the same socket type. Does anyone know if or how this is possible?
edit 1: Yes, I have a class hierarchy with PC component as the parent and CPU, Motherboard and others as children. By "compare" I mean that when assembling a PC from the components I have defined, given a motherboard, it would have to pick a CPU whose socket type property matches the motherboard's (if the motherboard has an AM4 socket, the CPU socket should be == AM4). I don't really have a preferred way to model this; I am very new to Protégé, but I am trying to learn and got stuck on this problem.
I'm studying embedded programming, so I'm new to this field.
Can someone explain how to identify tasks/threads from a given system description? Also, how can I estimate timing constraints, execution times, and so on? I'm really stuck.
Here is the system description I'm working on:
Implement a 2-degree-of-freedom servo motor system. A two-axis joystick is used for controlling the servo motors. Additionally, enable recording and reproducing the user's joystick path, so that an identical movement can be replicated multiple times. It is necessary to support the recording of 3 motion profiles with a length of at least 5 minutes. The profiles need to be recorded to non-volatile memory, with recording/playback controlled via the joystick button. Provide appropriate signalling of the currently selected profile and operating mode (recording/playback) using one LED for each profile. In the RT-Thread system, realize the necessary drivers on the Raspberry Pi Pico platform to support the devices used, and the application itself that implements the described system, with clearly separated threads for each of the observed functionalities.
It is tempting to partition functionally, but in practice you should partition based on deadlines, determinism and update rates. For simple systems, that may turn out to be the same as a functional partitioning. The part:
clearly separated threads for each of the observed functionalities
may lead you to an inappropriate partitioning. However, that may be the partitioning your tutor expects, even if it is sub-optimal.
There is probably no single solution, but obvious candidates for tasks are:
Joystick reader,
Servo controller,
Recorder,
Replayer.
Now, considering these candidates, it can be seen that joystick control and replay control are mutually exclusive, and also that replay itself is selected through the joystick button. Therefore it makes sense to combine them into a single task, not least because the replayer will communicate with the servo controller in the same way as the joystick does. So you might have:
Joystick reader / replayer,
Servo controller,
Recorder.
The recorder is necessarily a separate thread, because access to NV memory may be slow and non-deterministic. You would need to feed time and x/y position data into a message queue of sufficient depth to ensure that recording does not affect the timing of the motion control.
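As a rough illustration only, a recorder fed from an RT-Thread message queue might look like the sketch below; the sample layout, queue depth, thread priority and the store_sample_to_flash() helper are placeholders, not part of the assignment:

/* The joystick reader/replayer posts timestamped samples into a message
   queue; the recorder drains it and writes to NV memory at its own pace,
   so slow flash writes never block motion control. */
#include <rtthread.h>

struct motion_sample {
    rt_tick_t tick;   /* time of the sample */
    int16_t   x;      /* X position */
    int16_t   y;      /* Y position */
};

static rt_mq_t motion_mq;

static void recorder_entry(void *param)
{
    struct motion_sample s;
    for (;;) {
        /* Block until a sample arrives, then persist it. */
        if (rt_mq_recv(motion_mq, &s, sizeof(s), RT_WAITING_FOREVER) < 0)
            continue;
        /* store_sample_to_flash(&s);  -- hypothetical, application-specific */
    }
}

int recorder_init(void)
{
    /* Queue deep enough to ride out worst-case flash write latency. */
    motion_mq = rt_mq_create("motion", sizeof(struct motion_sample),
                             256, RT_IPC_FLAG_FIFO);
    rt_thread_t rec = rt_thread_create("recorder", recorder_entry, RT_NULL,
                                       1024, 20, 10);
    if (rec != RT_NULL)
        rt_thread_startup(rec);
    return 0;
}

/* In the joystick reader/replayer, after each sample:
   rt_mq_send(motion_mq, &sample, sizeof(sample)); */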
It is not clear what kind of servo you are using, or whether your application is responsible for the PID motion control or simply sends a position signal to a servo controller. If the latter, there may be no reason to separate the servo control from the reader/replayer, in which case you would have:
Joystick reader / replayer / Servo controller,
Recorder.
Other solutions are possible. For example, the recorder might record the servo position over time rather than the joystick position, and have the servo controller handle replay:
Joystick reader,
Servo controller / replayer,
Recorder.
That makes sense if the joystick polling and servo update rates differ, because you'd want to replay what the servo did, not what the joystick did.
I'm writing because I am currently working on a project using the gem5 simulator to simulate an ARM bigLITTLE configuration where the big CPU cluster has an L2 but the little CPU cluster does not. That is, I would like to simulate a system in which the little cores are even simpler than their default configuration. I am running the project using the full system bigLITTLE file (i.e., gem5/configs/example/fs_bigLITTLE.py). Is it possible to configure the system in this way?
My initial thought was to modify the python file so that the little cluster configuration is composed of the following:
class LittleCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock, cpu_voltage="1.0V"):
        cpu_config = [ObjectList.cpu_list.get("MinorCPU"), devices.L1I,
                      devices.L1D, devices.WalkCache, None]
        super(LittleCluster, self).__init__(system, num_cpus, cpu_clock,
                                            cpu_voltage, *cpu_config)
or, in layman's terms, provide None as the SimObject class name for the L2. Unfortunately, as one might expect, this causes the system to crash as gem5 expects an object to connect the ports.
My next idea was to write a new SimObject called EmptyCache that inherits from gem5's Cache class but does nothing. That is, on every access this object would return false, and it would be configured with no tag, data, or response latency. However, this caused coherence issues with the L1 caches in the little cluster, so I then changed it so that it evicts any cache block that it "hits" before returning false (the following is based on a prior post to the gem5-users mailing list: https://www.mail-archive.com/gem5-users#gem5.org/msg16882.html):
bool
EmptyCache::access(PacketPtr pkt, CacheBlk *&blk, Cycles &lat,
                   PacketList &writebacks)
{
    // Let the base class do the lookup; if it found a block, evict it so
    // this "cache" never retains data, then always report a miss.
    if (Cache::access(pkt, blk, lat, writebacks)) {
        Cache::evictBlock(blk);
    }
    return false;
}
This seems to solve the coherence issues with the L1 caches described above, but it appears to cause coherence issues in the memory bus (a SystemXBar, which derives from CoherentXBar).
At this point, I feel pretty much stuck and would appreciate any advice that you could provide! Thank you!
Edit: I've continued trying to make headway on this project, despite the setbacks in this direction, by modifying the gem5/configs/example/arm/devices.py file. I noted that one way to approach the issue would be to give the MinorCPU (little) cluster a private cache and then connect that cluster directly to the memory bus, while connecting the big CPU cluster to a cache hierarchy; but the problem with this direction is the incoherence between packets going to the little and big clusters.
Namely, there are several assertions and panic_if statements in the caches that expect a particular MemCmd enumeration and/or that the packet "hasSharers". Naturally, I assume this issue stems from my simulation setup, because the simulator runs fine without it; but is there a way to configure the simulator in this direction so that there is some semblance of coherence between the big and little CPU clusters?
Again, thank you for your help!
I have a project (a 2-player game) made in FreeRTOS. The game has 3 tasks
(Game Render, Joystick Task and PC Serial Communication).
Shared resources include:
Player 1 and Player 2 locations/coordinates. They are manipulated by the Serial task and the Joystick task respectively, and the Game Render task reads both locations and displays them (so the Player 1 location is shared between Serial and Game Render, and the Player 2 location between Joystick and Game Render).
A queue that is shared between the game render and the serial task (sending data and getting acks); the queue has been protected with a mutex on all write operations.
My question is which of these two scheduling policies is more suitable for this project: Rate Monotonic or Deadline Monotonic?
The tasks are not independent, in the sense that the serial communication uses acks. I think it should be Deadline Monotonic, but I'm not entirely sure.
To choose between DMS and RMS you need to know the period and deadline of each task: RMS assigns priorities by period alone, DMS by deadline, and the two coincide when every deadline equals its period (for example, a task with a 20 ms period but a 5 ms deadline gets a higher relative priority under DMS than its period alone would give it under RMS). From my experience it is better to focus on a good overall design first, and then measure and tweak the priorities to achieve the best response times.
One of the best summaries of good design principles I've encountered is this. In your case I would represent the two players as 'active objects' with their own input event queues. Send events to the players from the serial task, or even directly from the ISR. The game render would then also be an AO receiving events from the players, or a mutex-protected resource; it depends on what the render output is (mostly, how long it takes). Serial input and serial output should be considered two separate things; in most cases it doesn't make sense to conflate the two.
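As a rough sketch of one such active object in FreeRTOS (the event layout, queue depth, priority and the player_apply_event() helper are placeholders, not part of your project):

#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

typedef struct {
    int16_t dx;   /* requested change in X */
    int16_t dy;   /* requested change in Y */
} PlayerEvent;

static QueueHandle_t xPlayer1Queue;

static void vPlayer1Task(void *pvParameters)
{
    PlayerEvent ev;
    for (;;) {
        /* The active object sleeps until someone posts an event to it. */
        if (xQueueReceive(xPlayer1Queue, &ev, portMAX_DELAY) == pdPASS) {
            /* player_apply_event(&ev);  -- hypothetical: update the player's
               coordinates, then post an event to the render task */
        }
    }
}

void vCreatePlayer1(void)
{
    xPlayer1Queue = xQueueCreate(16, sizeof(PlayerEvent));
    xTaskCreate(vPlayer1Task, "Player1", configMINIMAL_STACK_SIZE + 128,
                NULL, tskIDLE_PRIORITY + 2, NULL);
}

/* The serial task posts events instead of touching the player's
   coordinates directly:
   xQueueSend(xPlayer1Queue, &ev, 0); */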
Here is another link that might be useful - look at '1.4 The Design of the “Fly ‘n’ Shoot” Game'
Also, you don't need a mutex around xQueueSend; the FreeRTOS queue API is already safe to call from multiple tasks. For sending from an ISR you only need xQueueSendFromISR, for example:
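Again just a sketch, reusing the hypothetical PlayerEvent queue from above (the IRQ handler name is illustrative only):

void UART_RX_IRQHandler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    PlayerEvent ev = { 0, 0 };   /* fill in from the received byte(s) */

    xQueueSendFromISR(xPlayer1Queue, &ev, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit if the send unblocked a
       higher-priority task (e.g. the player task). */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}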
In a professional context, I have to use the VL53L0X. This sensor was released recently, along with its API, meaning that there's no help on the internet yet:
http://www.st.com/content/st_com/en/products/embedded-software/proximity-sensors-software/stsw-img005.html
This API contains some source and header files, which I compiled with gcc. It works fine, despite clearly lacking comments. I flash the memory of an STM32 (NUCLEO-F401RE), which controls a VL53L0X sensor via an I2C bus. I now want to add more VL53L0X sensors on the same I2C bus, and I refer to this document (if you want to read it, go directly to the bottom half of page 5; the wiring is already done):
http://www.st.com/content/ccc/resource/technical/document/application_note/group0/0e/0a/96/1b/82/19/4f/c2/DM00280486/files/DM00280486.pdf/jcr:content/translations/en.DM00280486.pdf
The principle, which I have already applied to other sensors, is that they all start with the same I2C address. You then have to activate one, change its address, then activate the next one, change its address, and so on.
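Concretely, the bring-up sequence I have in mind looks roughly like the sketch below; xshut_set() and delay_ms() are hypothetical GPIO/delay helpers, and the VL53L0X_SetDeviceAddress() call, the VL53L0X_Dev_t fields and the 8-bit address convention would need to be checked against the API:

#define N_SENSORS          3
#define ADDR_DEFAULT_8BIT  0x52   /* every sensor boots with this address */

VL53L0X_Dev_t sensors[N_SENSORS];

void assign_sensor_addresses(void)
{
    int i;

    /* Hold every sensor in reset (XSHUT low) so none answers on the bus. */
    for (i = 0; i < N_SENSORS; i++)
        xshut_set(i, 0);

    for (i = 0; i < N_SENSORS; i++) {
        /* Wake up only sensor i: it answers on the default address. */
        xshut_set(i, 1);
        delay_ms(2);                       /* give it time to boot */

        sensors[i].I2cDevAddr = ADDR_DEFAULT_8BIT;
        VL53L0X_SetDeviceAddress(&sensors[i], ADDR_DEFAULT_8BIT + 2 * (i + 1));
        sensors[i].I2cDevAddr = ADDR_DEFAULT_8BIT + 2 * (i + 1);

        /* VL53L0X_DataInit(), VL53L0X_StaticInit(), ... continue here */
    }
}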
Unfortunately, STMicroelectronics didn't publish the list of the I2C registers, so I have to use their API to control multiple sensors. The document linked above explains how to do so. Among other things, it specifies:
In vl53L0x_platform.h API file
• Set VL53L0x_SINGLE_DEVICE_DRIVER macro to 0 so that API implementation will
be automatically adapted to a multi-device context.
I looked everywhere in the API folder, but I was not able to find any reference to a VL53L0x_SINGLE_DEVICE_DRIVER macro. Setting it to 0 won't change anything, as this string is not present anywhere in the API files. Did anyone run into a similar problem?
I'm working on the same thing. It seems that you're further ahead than I am. However, putting this in my while(1) loop seems to make both the sensors work.
ResetAndDetectSensor(0);
TimeStamp_Reset();
The guide says that in order to use all the sensors simultaneously, you need to pull the XSHUT pin high for all the sensors, reset the timestamp and then pick up the sensor which actually detects something.
I am trying to associate the OpenCL GPU devices with the NVAPI devices that I get using NvAPI_EnumPhysicalGPUs in a multi-GPU system.
The thing is, I can use clGetDeviceInfo with CL_DEVICE_VENDOR_ID, which is always unique and would be the best way, and I can retrieve the vendor from NvAPI_SYS_GetChipSetInfo. But that is not associated with the NvPhysicalGpuHandle that I get from NvAPI_EnumPhysicalGPUs. Is there any way to associate the two?
Of course, I could just match by name, but that is not a good solution.
There is a way to do it. OpenCL has a poorly documented NVIDIA-specific query, CL_DEVICE_PCI_BUS_ID_NV (constant 0x4008, from the cl_nv_device_attribute_query extension): call clGetDeviceInfo with it and it will give you the PCI bus ID of the given device handle.
cl_uint busID;
clGetDeviceInfo(device, 0x4008 /* CL_DEVICE_PCI_BUS_ID_NV */, sizeof(cl_uint), &busID, NULL);
printf("%d\n", busID);
On the NVAPI side, use NvAPI_GPU_GetBusId. Then you can associate the handles by comparing the bus IDs, for example:
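A minimal sketch of the matching loop (error checking omitted; the helper name is mine):

#include <CL/cl.h>
#include "nvapi.h"

/* NVIDIA-specific query from cl_nv_device_attribute_query */
#define CL_DEVICE_PCI_BUS_ID_NV 0x4008

/* Returns the NVAPI handle whose PCI bus ID matches the OpenCL device,
   or NULL if none matches. */
NvPhysicalGpuHandle match_cl_device(cl_device_id clDev)
{
    cl_uint clBus = 0;
    clGetDeviceInfo(clDev, CL_DEVICE_PCI_BUS_ID_NV, sizeof(clBus), &clBus, NULL);

    NvPhysicalGpuHandle handles[NVAPI_MAX_PHYSICAL_GPUS];
    NvU32 count = 0;
    NvAPI_Initialize();
    NvAPI_EnumPhysicalGPUs(handles, &count);

    for (NvU32 i = 0; i < count; i++) {
        NvU32 nvBus = 0;
        NvAPI_GPU_GetBusId(handles[i], &nvBus);
        if (nvBus == clBus)
            return handles[i];     /* same physical GPU */
    }
    return NULL;
}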