GPS fix_type Value = 4 - dronekit-python

After looking through the logs of a recent test flight, I found that my craft reported a value of 4 for the fix_type attribute of the class dronekit.GPSInfo(eph, epv, fix_type, satellites_visible).
eph and epv had no value, and satellites_visible varied between 9 and 12.
The flight was 30 minutes long. The GPS unit is a u-blox GPS + compass module.
Indoors I get fix_type 0 or 1 as expected, but outdoors I get 3 or 4. I can find information on a 3D fix, but what would a 4D GPS fix mean?
How is this variable getting set in the source code?
class GPSInfo(object):
    """
    Standard information about GPS.

    If there is no GPS lock the parameters are set to ``None``.

    :param Int eph: GPS horizontal dilution of position (HDOP).
    :param Int epv: GPS vertical dilution of position (VDOP).
    :param Int fix_type: 0-1: no fix, 2: 2D fix, 3: 3D fix
    :param Int satellites_visible: Number of satellites visible.

    .. todo:: FIXME: GPSInfo class - possibly normalize eph/epv? report fix type as string?
    """

    def __init__(self, eph, epv, fix_type, satellites_visible):
        self.eph = eph
        self.epv = epv
        self.fix_type = fix_type
        self.satellites_visible = satellites_visible

    def __str__(self):
        return "GPSInfo:fix=%s,num_sat=%s" % (self.fix_type, self.satellites_visible)

Other u-blox GPS code (the gpsData struct) defines the fix type as having the following values:
GNSSfix Type:
0: no fix
1: dead reckoning only
2: 2D-fix
3: 3D-fix
4: GNSS + dead reckoning combined
5: time only fix
So the values 4 and 5 likely do not mean 4D and 5D fixes, as you might assume from the pattern set by 2 and 3. A value of 4 (GNSS + dead reckoning) also makes sense given the outdoor scenario you describe.
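As for how the attribute is set: dronekit populates GPSInfo from the MAVLink GPS_RAW_INT message and passes fix_type through unmodified, so whatever the autopilot's GPS driver reports is what you see. A minimal sketch of watching that message yourself (the listener name and connection string are illustrative, not dronekit internals):

from dronekit import connect

# Example connection string (SITL's default UDP endpoint); adjust to your setup.
vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)

# Labels taken from the u-blox fix-type table above.
FIX_TYPE_LABELS = {
    0: "no fix",
    1: "dead reckoning only",
    2: "2D fix",
    3: "3D fix",
    4: "GNSS + dead reckoning",
    5: "time-only fix",
}

@vehicle.on_message('GPS_RAW_INT')
def gps_raw_int_listener(self, name, message):
    # fix_type arrives straight from the autopilot, so any of the
    # u-blox values above can show up here unchanged.
    print("fix: %s (%s), sats: %s" % (
        message.fix_type,
        FIX_TYPE_LABELS.get(message.fix_type, "unknown"),
        message.satellites_visible,
    ))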

Related

How can I obtain an overall p-value in a feglm model?

I am running a project to find factors associated with a certain blood test being performed; let's say a diabetes blood test for this post. The variables that I have are 1) year (2018, 2019, 2020), 2) gender (male, female, other), 3) clinic location of individual clinics (metropolitan, regional, rural), and 4) age group (20-29, 30-39, 40-49, 50-59, 60-89 years old). The data are a clustered sample by medical clinic (clinic_id).
I tried the survey, srvyr, and fixest packages (clustered sample) and found that the results of feglm from the fixest package were very similar to those of Stata.
I fit this model with the fixest package using the following script:
model_tested <- feglm(tested ~ year + gender + clinic_location + age_group, data = tested_proportion, family = "binomial", se = "cluster", cluster = ~clinic_id)
I was able to obtain individual p-values like the following:
             Pr(>|t|)
year2019      0.71101
year2020      0.00973
female        0.00000
other         0.08090
age20-29      0.00000
age30-39      0.00000
age40-49      0.39693
age50-59      0.00000
age60-80      0.00000
With glm, I can run an Anova (or aov) test to obtain the overall p-value of each variable, such as the p-values of year, gender, and age group.
However, anova(model_tested) fails with an error message saying that the Anova test is not supported for feglm models.
I tried the following script to obtain the overall p-value of each variable, using wald.test:
p_overall_year   <- aod::wald.test(Sigma = vcov(model_tested), b = coef(model_tested), Terms = 2:3)
p_overall_gender <- aod::wald.test(Sigma = vcov(model_tested), b = coef(model_tested), Terms = 4:5)
p_overall_age    <- aod::wald.test(Sigma = vcov(model_tested), b = coef(model_tested), Terms = 6:10)
My question is: is there a better way to obtain the overall p-values for each variable?
Also, these gave an overall p-value for each group, but the values were somewhat different from those I obtained in Stata using testparm i(2018/2020).year, which reports an adjusted Wald test. For example, the overall p-value of year in R was 0.0013, whereas in Stata it was 0.0891.
Are there any other methods I can try in R to get overall p-values similar to Stata's?
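One possibly simpler route, sketched here under the assumption of a reasonably recent fixest version, is fixest's own wald() helper, which jointly tests the nullity of all coefficients whose names match a regular expression, using the model's clustered VCOV:

library(fixest)
# Joint Wald tests for each factor, matching coefficient names by regex;
# each call prints the test and invisibly returns the statistic and p-value.
p_overall_year   <- wald(model_tested, keep = "year")
p_overall_gender <- wald(model_tested, keep = "gender")
p_overall_age    <- wald(model_tested, keep = "age_group")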

How to sample a DataFrame?

The goal is to subsample a data frame.
Code:
# 1: convert the date column to datetime
dg.Yr_Mo_Dy = pd.to_datetime(dg.Yr_Mo_Dy, format='%Y%m%d')
# 2: set the date as the index
dg = dg.set_index(dg.Yr_Mo_Dy, drop=True)
# 3: resample into bins and average
dg.resample('1AS').mean().mean()
That gives:
RPT 14.847325
VAL 12.914560
ROS 13.299624
KIL 7.199498
SHA 11.667734
BIR 8.054839
DUB 11.819355
CLA 9.512047
MUL 9.543208
CLO 10.053566
BEL 14.550520
MAL 18.028763
dtype: float64
The code groups the rows into bins and averages the intermediate values of each bin.
Similarly, it is also possible to sum the values in each bin by replacing mean() with sum().
However, what I want to do is not an average but a sampling. That is, to keep only one value per bin, without averaging or summing the intermediate values.
For example, the data 1, 2, 3, 4, 5, 6, ... downsampled by a factor of 2 gives 2, 4, 6, ..., not 1.5, 3.5, 5.5, ...
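A minimal sketch of that idea (assuming dg is the indexed DataFrame from the code above, and that one representative row per bin is wanted rather than an aggregate):

# Keep a single row per resampling bin instead of aggregating:
dg.resample('1AS').first()   # the first observation in each bin
dg.resample('1AS').asfreq()  # or: the value exactly at each bin edge
# Or ignore the dates and take every 10th row by position:
dg.iloc[::10]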

Extraction of matched dataset from MatchThem

I have browsed almost all possible pages on the subject and I still can't find a way to extract a matched dataset with the MatchThem package.
By analogy, MatchIt's match.data() function extracts the dataset of matched data (for example, 3:1). Although MatchThem's complete() function is the equivalent, it apparently does not allow extracting only the imputed AND matched dataset.
Here is an example of multiple imputation with 3:1 matching from which I am trying to extract multiple matched datasets:
library(mice)
library(MatchThem)
#Multiple imputations
mids_object <- mice(data, maxit = 5, m = 3, seed = 20211022, printFlag = F) # m = 3 is deliberately low for this example.
#Matching
mimids_object <- matchthem(primary_subtype ~ age + bmi + ps, data = mids_object, approach = "within", ratio = 3, method = "optimal")
#Details of matched data
print(mimids_object)
Printing | dataset: #1
A matchit object
method: Variable ratio 3:1 optimal pair matching
distance: Propensity score
- estimated with logistic regression
number of obs: 761 (original), 177 (matched)
target estimand: ATT
covariates: age, bmi, ps
#Extracting matched dataset
complete(mimids_object, action = "long") -> complete_mi_matched
#Summary of the extracted dataset to check the correct number of matches
summary(complete_mi_matched$primary_subtype)
classic ADK         SRC
        702          59
It should show the 3:1 matched proportion, with 177 matched (177 classic ADK and 59 SRC).
I must be missing something. Thanks in advance for your help or suggestions.
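In case it helps, here is a hedged sketch of one approach. It assumes, by analogy with MatchIt, that MatchThem appends a weights column to the extracted data and that unmatched units carry a weight of 0; column names may differ by package version:

library(MatchThem)
# Extract all imputed datasets in long format, with matching columns appended.
long_all <- complete(mimids_object, action = "long")
# Keep only the matched units in each imputation.
long_matched <- long_all[long_all$weights > 0, ]
# Check the matched counts per imputation.
table(long_matched$.imp, long_matched$primary_subtype)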

Split array into chunks based on timestamp in Haskell

I have an array of records (a custom data type) in Haskell which I want to aggregate based on each record's timestamp. In very general terms, each record looks like this:
data Record = Record { event :: String,
                       time :: Double,
                       from :: Int,
                       to :: Int
                     } deriving (Show, Eq)
I used a Double for the timestamp since that is the same format used in the tracefile.
And I parse them from a CSV file into an array of records: [Record]
Now I'm looking to get an approximation of instantaneous events / time. So I want to split the array into several arrays based on the timestamp (say, every 1 second) and then fold across each smaller array.
The problem is I can't figure out how to split an array based on the value of a record. Looking on Hoogle I found several functions like splitEvery and splitWhen, but I'm lost. I considered using splitWhen to break up the list when, say, (mod time 0.1) == 0, but even if that worked it would remove the elements it's splitting on (which I don't want to do).
I should note that the records are NOT evenly spaced in time. E.g. the timestamp on sequential records is not going to differ by a fixed amount.
I am more than willing to store the data in a different format if you can suggest one that would make this sort of work easier.
A quick sample of the data I'm parsing (from a ns2 simulation):
r 0.114 1 2 tcp 1000 ________ 2 1.0 5.0 0 2
r 0.240 1 2 tcp 1000 ________ 2 1.0 5.0 0 2
r 0.914 2 1 tcp 1000 ________ 2 5.0 1.0 0 3
If you have [Record] and you want to group them by a specific condition, you can use Data.List.groupBy. I'm assuming that for your time :: Double, 1 second is the base unit, so time = 1 is 1 second, time = 100 is 100 seconds, etc, so adjust this to whatever system you're actually using:
import Data.List
import Data.Function (on)
isInSameClockSecond :: Record -> Record -> Bool
isInSameClockSecond = (==) `on` (floor . time :: Record -> Integer)
-- The type signature is given for floor . time to remove any ambiguity
-- due to floor's polymorphic type signature.
groupBySameClockSecond :: [Record] -> [[Record]]
groupBySameClockSecond = groupBy isInSameClockSecond
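One caveat worth adding: Data.List.groupBy only merges adjacent elements, so this assumes the records are already in chronological order. A trace file usually is, but sorting first is cheap insurance (a sketch using sortOn from Data.List):

import Data.List (groupBy, sortOn)

-- Sort by timestamp first so that records from the same clock second
-- are adjacent before grouping.
groupBySameClockSecondSorted :: [Record] -> [[Record]]
groupBySameClockSecondSorted = groupBy isInSameClockSecond . sortOn time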

Concatenating a numeric matrix with a 1xn cell

When trying the following concatenation:
for i = 1:length(Open)
    data(i,1) = Open(i);
    data(i,2) = Close(i);
    data(i,3) = High(i);
    data(i,4) = Low(i);
    data(i,5) = Volume(i);
    data(i,6) = Adj_Close(i);
    data(i,7) = cell2mat(dates(1,i));
end
Here all matrices except dates contain double values, and dates is a cell array of date strings in the format '2001-01-01'. Running the code above, I get the following error:
??? Subscripted assignment dimension mismatch.
Error in ==> Test_Trades_part2 at 81
data(i,7) = cell2mat(dates(1,i));
The code above is tied to a master code which takes data from Yahoo Finance and then puts it in my SQL database.
The assignment fails because cell2mat(dates(1,i)) returns a 1-by-10 character array, which cannot fit into the single element data(i,7). A convenient way to store dates in a completely numeric format is with datenum:
>> data(i,7) = datenum('2001-01-01');
>> disp(data(i,:))
0 0 0 0 0 0 730852
Whether this is useful to you depends on what you intend to do with the SQL database. However, converting back to a string in MATLAB is straightforward with the datestr command:
>> datestr(730852,'yyyy-mm-dd')
ans =
2001-01-01
APPENDIX:
A serial date number represents a calendar date as the number of days that has passed since a fixed base date. In MATLAB, serial date number 1 is January 1, 0000.
Thank you all for the help!
I solved this issue using the following methodology (incorporating structs; should have thought of that... stupid me):
data = [open, close_price, high, low, volume, closeadj];
s = struct('OpenPrice', data(:,1), 'ClosePrice', data(:,2), 'High', data(:,3), 'Low', data(:,4), 'Volume', data(:,5), 'Adj_Close', data(:,6), 'Dates', {dates});
This way, I enter all the values into the struct, circumventing the need to concatenate numeric and string matrices. Odd, though, that such mixed types are not allowed in one matrix; I would suppose that is the reason structs were created.
