I want to create multiple arrays based on multiple dataframes.
I have four different dataframes (df, df1, df2, df3) and want to create all_points for each (all_points, all_points1, all_points2, all_points3) and dm for each (dm, dm1, dm2, dm3).
Is it possible to do this within a function?
all_points = df[['lat', 'lng']].values
dm = scipy.spatial.distance.cdist(all_points,
                                  all_points,
                                  get_distance)
all_points1 = df1[['lat', 'lng']].values
dm1 = scipy.spatial.distance.cdist(all_points1,
                                   all_points1,
                                   get_distance)
I tried something like this but it is not working:
def b(all_points):
    all_points = df[['lat', 'lng']].values
    return all_points
all_points = b(df)
all_points1 = b(df1)
...
Any help is appreciated!
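For reference, a minimal sketch of how this could be wrapped in a function (the name make_points_and_dm is just illustrative, and get_distance is assumed to be defined as in the snippet above). The key fix is to use the function's parameter instead of the global df:

from scipy.spatial.distance import cdist

def make_points_and_dm(frame):
    # use the parameter, not the global df
    points = frame[['lat', 'lng']].values
    # pairwise distance matrix over the points, using the custom metric
    dm = cdist(points, points, get_distance)
    return points, dm

all_points, dm = make_points_and_dm(df)
all_points1, dm1 = make_points_and_dm(df1)
all_points2, dm2 = make_points_and_dm(df2)
all_points3, dm3 = make_points_and_dm(df3)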
I have a pandas dataframe with 20k rows containing two columns named English and te. I changed the English column name to en. I'm trying to split the dataset into train, validation, and test sets, and I want to convert it into raw_datasets. The output I'm expecting is:
DatasetDict({
    train: Dataset({
        features: ['translation'],
        num_rows: 18000
    })
    validation: Dataset({
        features: ['translation'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['translation'],
        num_rows: 1000
    })
})
When I write code like raw_datasets["train"][0], it should return output like the following:
{'translation': {'en': 'Membership of Parliament: see Minutes',
'to': 'Componenţa Parlamentului: a se vedea procesul-verbal'}}
The data must be in a DatasetDict, similar to the DatasetDict type we get when loading a dataset from Hugging Face. Below is the code I've written, but it's not working:
import pandas as pd
from collections import namedtuple

Dataset = namedtuple('Dataset', ['features', 'num_rows'])
DatasetDict = namedtuple('DatasetDict', ['train', 'validation', 'test'])

def create_dataset_dict(df):
    # Rename the column
    df = df.rename(columns={'English': 'en'})
    # Split the data into train, validation and test
    train_df = df.iloc[:18000, :]
    validation_df = df.iloc[18000:19000, :]
    test_df = df.iloc[19000:, :]
    # Create the dataset dictionaries
    train = Dataset(features=['translation'], num_rows=18000)
    validation = Dataset(features=['translation'], num_rows=1000)
    test = Dataset(features=['translation'], num_rows=1052)
    # Create the final dataset dictionary
    datasets = DatasetDict(train=train, validation=validation, test=test)
    return datasets

def preprocess_dataset(df):
    df = df.rename(columns={'English': 'en'})
    train_df = df.iloc[:18000, :]
    validation_df = df.iloc[18000:19000, :]
    test_df = df.iloc[19000:, :]
    train_dict = [{'translation': {'en': row['en'], 'te': row['te']}} for _, row in train_df.iterrows()]
    validation_dict = [{'translation': {'en': row['en'], 'te': row['te']}} for _, row in validation_df.iterrows()]
    test_dict = [{'translation': {'en': row['en'], 'te': row['te']}} for _, row in test_df.iterrows()]
    return DatasetDict(train=train_dict, validation=validation_dict, test=test_dict)

df = pd.read_csv('eng-to-te.csv')
raw_datasets = preprocess_dataset(df)
The above code is not working. Can anyone help me with this?
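For reference, a minimal sketch of one way this could work, assuming the Hugging Face datasets library is available (Dataset.from_list requires datasets >= 2.4). Building real Dataset objects instead of namedtuples makes raw_datasets["train"][0] behave like the expected output:

import pandas as pd
from datasets import Dataset, DatasetDict

def preprocess_dataset(df):
    df = df.rename(columns={'English': 'en'})
    # wrap each row in the {'translation': {...}} structure
    records = [{'translation': {'en': row['en'], 'te': row['te']}}
               for _, row in df.iterrows()]
    # build real Dataset objects, split by position
    return DatasetDict({
        'train': Dataset.from_list(records[:18000]),
        'validation': Dataset.from_list(records[18000:19000]),
        'test': Dataset.from_list(records[19000:]),
    })

raw_datasets = preprocess_dataset(pd.read_csv('eng-to-te.csv'))
print(raw_datasets['train'][0])  # {'translation': {'en': ..., 'te': ...}}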
I am new to R and I am trying to get my mind around the apply family of functions. So far I have managed to run my ANOVAs for all the variables in my data and I got the pairwise comparisons.
varlist <- names(dt)[5:length(dt)]
# loop: fit one model per variable
models <- lapply(X = varlist,
                 FUN = function(t) lm(formula = paste0("`", t, "` ~ block+irrigation*genotype"), data = dt))
# name each model in the list after its variable
names(models) <- varlist
# apply anova to each model stored in the list
lapply(models, anova)
# marginal means for all variables
res.model1 <- lapply(models, function(x) pairs(emmeans(x, ~genotype:irrigation)))
res.model1
So far so good. Now I want to create a compact letter display (CLD) that I can use for plotting. Previously I used the following, but I can't work out how to wrap this code in lapply:
CLD = cld(res.model1,
          alpha=0.05,
          Letters=letters,
          adjust="tukey")
I use the CLD data to create graphs. I managed to get the letters with the following code, but then I don't get the full ANOVA table:
tx <- with(dt, interaction(irrigation, genotype))  # determine the factors
model2 <- lapply(varlist, function(x) {
    lm(substitute(i ~ block + tx, list(i = as.name(x))), data = dt)
})  # using the factors already in "tx"
lapply(model2, anova)
letters = lapply(model2, function(m) HSD.test(m, "tx", alpha = 0.05, group = TRUE, console = TRUE))
Any suggestions on how to achieve what I need? Thank you!
When creating an ALS model, we can extract a userFactors DataFrame and an itemFactors DataFrame. These DataFrames contain a column with an Array.
I would like to generate some random data and union it to the userFactors DataFrame.
Here is my code:
val df1: DataFrame = Seq((123, 456, 4.0), (123, 789, 5.0), (234, 456, 4.5), (234, 789, 1.0)).toDF("user", "item", "rating")
val model1 = (new ALS()
  .setImplicitPrefs(true)
  .fit(df1))
val iF = model1.itemFactors
val uF = model1.userFactors
I then create a random DataFrame using a VectorAssembler with this function:
def makeNew(df: DataFrame, rank: Int): DataFrame = {
  var df_dummy = df
  var inputCols: Array[String] = Array()
  for (i <- 0 to rank) {
    df_dummy = df_dummy.withColumn("feature".concat(i.toString), rand())
    inputCols = inputCols :+ "feature".concat(i.toString)
  }
  val assembler = new VectorAssembler()
    .setInputCols(inputCols)
    .setOutputCol("userFeatures")
  val output = assembler.transform(df_dummy)
  output.select("user", "userFeatures")
}
I then create the DataFrame with new user IDs and add the random vectors and bias:
val usersDf: DataFrame = Seq((567), (678)).toDF("user")
var usersFactorsNew: DataFrame = makeNew(usersDf, 20)
The problem arises when I union the two DataFrames.
usersFactorsNew.union(uF) produces the error:
org.apache.spark.sql.AnalysisException: Union can only be performed on tables with the compatible column types. struct<type:tinyint,size:int,indices:array<int>,values:array<double>> <> array<float> at the second column of the second table;;
If I print the schema, the uF DataFrame has a feature vector of type Array[Float] and the usersFactorsNew DataFrame has a feature vector of type Vector.
My question is how to change the type of the Vector to an Array in order to perform the union.
I tried writing this udf with little success:
val toArr: org.apache.spark.ml.linalg.Vector => Array[Double] = _.toArray
val toArrUdf = udf(toArr)
Perhaps the VectorAssembler is not the best option for this task. However, at the moment, it is the only option I have found. I would love to get some recommendations for something better.
Instead of creating a dummy dataframe and using VectorAssembler to generate a random feature vector, you can simply use a UDF directly. The userFactors from the ALS model will contain Array[Float], so the output of the UDF should match that.
val createRandomArray = udf((rank: Int) => {
  Array.fill(rank)(Random.nextFloat)
})
Note that this will give numbers in the interval [0.0, 1.0) (same as the rand() used in the question); if other numbers are required, modify as needed.
Using a rank of 3 and the usersDf:
val usersFactorsNew = usersDf.withColumn("userFeatures", createRandomArray(lit(3)))
will give a dataframe as follows (of course with random feature values):
+----+----------------------------------------------------------+
|user|userFeatures |
+----+----------------------------------------------------------+
|567 |[0.6866711267486822,0.7257031656127676,0.983562255688249] |
|678 |[0.7013908820314967,0.41029552817665327,0.554591149586789]|
+----+----------------------------------------------------------+
Joining this dataframe with the uF dataframe should now be possible.
The reason the UDF did not work is likely that it gives an Array[Double] while you need an Array[Float] for the union. It should be possible to fix with a map(_.toFloat):
val toArr: org.apache.spark.ml.linalg.Vector => Array[Float] = _.toArray.map(_.toFloat)
val toArrUdf = udf(toArr)
Your whole process is correct, and even the udf function works. All you need to do is change the last part of the makeNew function:
def makeNew(df: DataFrame, rank: Int): DataFrame = {
  var df_dummy = df
  var inputCols: Array[String] = Array()
  for (i <- 0 to rank) {
    df_dummy = df_dummy.withColumn("feature".concat(i.toString), rand())
    inputCols = inputCols :+ "feature".concat(i.toString)
  }
  val assembler = new VectorAssembler()
    .setInputCols(inputCols)
    .setOutputCol("userFeatures")
  val output = assembler.transform(df_dummy)
  output.select(col("id"), toArrUdf(col("userFeatures")).as("features"))
}
and you should be good to go. Note that I created usersDf with an id column rather than a user column, so when you run
val usersDf: DataFrame = Seq((567), (678)).toDF("id")
var usersFactorsNew: DataFrame = makeNew(usersDf, 20)
usersFactorsNew.union(uF).show(false)
you should get:
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|id |features |
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|567|[0.8259185719733708, 0.327713892339658, 0.049547223031371046, 0.056661808506210054, 0.5846626163454274, 0.038497936270104005, 0.8970865088803417, 0.8840660648882804, 0.837866669938156, 0.9395263094918058, 0.09179528484355126, 0.4915430644129799, 0.11083447052043116, 0.5122858182953718, 0.4302683812966408, 0.3862741815833828, 0.6189322403095068, 0.3000371006293433, 0.09331299668168902, 0.7421838728601371, 0.855867963988993]|
|678|[0.7686514248005568, 0.5473580740023187, 0.072945344124282, 0.36648594574355287, 0.9780202082328863, 0.5289221651923784, 0.3719451099963028, 0.2824660794505932, 0.4873197501260199, 0.9364676464120849, 0.011539929543513794, 0.5240615794930654, 0.6282546154521298, 0.995256022569878, 0.6659179561266975, 0.8990775317754092, 0.08650071017556926, 0.5190186149992805, 0.056345335742325475, 0.6465357505620791, 0.17913532817943245] |
|123|[0.04177388548851013, 0.26762014627456665, -0.19617630541324615, 0.34298020601272583, 0.19632814824581146, -0.2748605012893677, 0.07724890112876892, 0.4277132749557495, 0.1927199512720108, -0.40271613001823425] |
|234|[0.04139673709869385, 0.26520395278930664, -0.19440513849258423, 0.3398836553096771, 0.1945556253194809, -0.27237895131111145, 0.07655145972967148, 0.42385169863700867, 0.19098000228405, -0.39908021688461304] |
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I am new to both Spark and Scala, and I'm trying to practice the join command in Spark.
I have two csv files:
Ads.csv is:
5de3ae82-d56a-4f70-8738-7e787172c018,AdProvider1
f1b6c6f4-8221-443d-812e-de857b77b2f4,AdProvider2
aca88cd0-fe50-40eb-8bda-81965b377827,AdProvider1
940c138a-88d3-4248-911a-7dbe6a074d9f,AdProvider3
983bb5e5-6d5b-4489-85b3-00e1d62f6a3a,AdProvider3
00832901-21a6-4888-b06b-1f43b9d1acac,AdProvider1
9a1786e1-ab21-43e3-b4b2-4193f572acbc,AdProvider1
50a78218-d65a-4574-90de-0c46affbe7f3,AdProvider5
d9bb837f-c85d-45d4-95f2-97164c62aa42,AdProvider4
611cf585-a8cf-43e9-9914-c9d1dc30dab5,AdProvider1
Impression.csv is:
5de3ae82-d56a-4f70-8738-7e787172c018,Publisher1
f1b6c6f4-8221-443d-812e-de857b77b2f4,Publisher2
aca88cd0-fe50-40eb-8bda-81965b377827,Publisher1
940c138a-88d3-4248-911a-7dbe6a074d9f,Publisher3
983bb5e5-6d5b-4489-85b3-00e1d62f6a3a,Publisher3
00832901-21a6-4888-b06b-1f43b9d1acac,Publisher1
9a1786e1-ab21-43e3-b4b2-4193f572acbc,Publisher1
611cf585-a8cf-43e9-9914-c9d1dc30dab5,Publisher1
I want to join them on the first ID as the key, so I get the two values together.
So I read them in like this:
val ads = sc.textFile("ads.csv")
ads: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:21
val impressions = sc.textFile("impressions.csv")
impressions: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[3] at textFile at <console>:21
OK, so I have to make key-value pairs:
val adPairs = ads.map(line => line.split(","))
val impressionPairs = impressions.map(line => line.split(","))
res11: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[6] at map at <console>:23
res13: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[7] at map at <console>:23
But I can't join them:
val result = impressionPairs.join(adPairs)
<console>:29: error: value join is not a member of org.apache.spark.rdd.RDD[Array[String]]
val result = impressionPairs.join(adPairs)
Do I need to convert the pairs into another format?
You are almost there, but what you need is to transform the Array[String] into key-value pairs, like this:
val adPairs = ads.map(line => {
  val substrings = line.split(",")
  (substrings(0), substrings(1))
})
(and the same for impressionPairs)
That will give you RDDs of type RDD[(String, String)], which can then be joined :)
The code I'm using imports data from multiple files and saves it into a cell array. The code is as follows:
[FileName,PathName,FilterIndex] = uigetfile('*.txt*','MultiSelect','on');
numfiles = size(FileName,2);
FileData = cell(1,numfiles);
for ii = 1:numfiles
    FileName{ii};
    A = [];
    entirefile = fullfile(PathName,FileName{ii});
    fid = fopen(entirefile);
    tline = fgets(fid);
    while ischar(tline)
        parts = textscan(tline, '%f;');
        if numel(parts{1}) > 0
            A = [A; parts{:}'];
        end
        tline = fgets(fid);
    end
    fclose(fid);
    FileData{ii} = A;
    A = FileData{ii};
    X = A(:,1);
    Y = A(:,5);
    DataToUse = [X,Y];
end
Now my issue is that I want to use the first DataToUse created by the loop (the data from the first file) separately from the other files, but I cannot isolate it. I have tried DataToUse(1), DataToUse(1,1) and DataToUse(:,[1,2]), but none are working for me. An example of the type of data would be:
DataToUse=
0.0762 0.0271
0.0763 0.2671
0.0764 0.4079
0.0765 0.0510
0.0766 0.0087
0.0767 0.0099
0.0768 0.0067
0.0769 0.0047
0.0770 0.0047
0.0771 0.0349
0.0772 0.2094
0.0773 0.2740
0.0774 0.0294
0.0775 0.0100
0.0776 0.0159
I have different amounts of this kind of data depending on how many files are selected, but I would like to use only the first initially and the others later. Does anybody know how I can go about doing this? Many thanks in advance!
The solution is to use cell arrays, like so:
DataToUse{ii} = [X, Y];  % store each file's [X, Y] in its own cell
To get the desired output, put this after your for-loop:
firstLoopXY = DataToUse{1};  % the [X, Y] data from the first file
Enjoy!