Rename columns with multiple value dictionary - arrays

I have a dataframe and would like to rename the columns based on a dictionary with multiple values per key. The dictionary key has the desired column name, and the values hold possible old column names. The column names have no pattern.
import pandas as pd
column_dict = {'a':['col_a','col_1'], 'b':['col_b','col_2'], 'c':['col_c','col_3']}
df = pd.DataFrame([(1,2.,'Hello'), (2,3.,"World")], columns=['col_1', 'col_2', 'col_3'])
Function to replace text with key:
def replace_names(text, dict):
    for key in dict:
        text = text.replace(dict[key], key)
    return text

replace_names(df.columns.values, column_dict)
This gives an error when called on the column names:
AttributeError: 'numpy.ndarray' object has no attribute 'replace'
Is there another way to do this?

You can use df.rename(columns=...) if you supply a dict which maps old column names to new column names:
import pandas as pd
column_dict = {'a':['col_a','col_1'], 'b':['col_b','col_2'], 'c':['col_c','col_3']}
df = pd.DataFrame([(1,2.,'Hello'), (2,3.,"World")], columns=['col_1', 'col_2', 'col_3'])
col_map = {col:key for key, cols in column_dict.items() for col in cols}
df = df.rename(columns=col_map)
yields
   a    b      c
0  1  2.0  Hello
1  2  3.0  World
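An equivalent sketch, if you prefer to rewrite the column index directly: col_map.get(c, c) leaves any column that has no mapping unchanged.
# Alternative: rebuild the column index; unmapped names are kept as-is.
df.columns = [col_map.get(c, c) for c in df.columns]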

Related

I am struggling to select a string of data from a database table and print it as a variable

I've been trying to learn how to use sqlite3 with Python 3.10 and I can't find any explanation of how I'm supposed to grab saved data from a database and store it in a variable.
I'm attempting to do that myself in this code, but it just prints out
<sqlite3.Cursor object at 0x0000018E3C017AC0>
Anyone know the solution to this?
My code is below
import sqlite3
con = sqlite3.connect('main.db')
cur = con.cursor()
#Create a table called "Datatable" if it does not exist
cur.execute('''CREATE TABLE IF NOT EXISTS datatable
               (Name PRIMARY KEY, age, pronouns) ''')
# The PRIMARY KEY constraint on Name disallows the same name from being
# inserted into the table twice.
name = 'TestName'#input("What is your name? : ")
age = 'TestAge'#input("What is your age? : ")
def data_entry():
    cur.execute("INSERT INTO datatable (name, age)")
    con.commit
name = cur.execute('select name from datatable')
print(name)
Expected result from print(name): TestName
Actual result: <sqlite3.Cursor object at 0x00000256A58B7AC0>
The execute statement fills the cur object with data, but then you need to get the data out:
rows = cur.fetchall()
for row in rows:
    print(row)
You can read more here: https://www.sqlitetutorial.net/sqlite-python/sqlite-python-select/
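As a fuller hedged sketch (the INSERT in the question is also missing its VALUES clause, and con.commit is never called because the parentheses are missing), the write-then-read flow could look like this:
# Sketch only: parameterized INSERT, then fetch a single value back out.
cur.execute("INSERT INTO datatable (name, age) VALUES (?, ?)", (name, age))
con.commit()  # commit is a method call, so the parentheses are required
cur.execute("SELECT name FROM datatable WHERE name = ?", (name,))
row = cur.fetchone()  # returns a tuple, or None if there was no match
if row is not None:
    print(row[0])  # e.g. 'TestName'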

How can I loop through a vector data with attribute table so that I create a new field by appending values from an existing field?

[Screenshot from the original question: how the attribute table should look if the algorithm works.]
How can I loop through vector data with an attribute table so that I create a new field by appending values from an already existing field in the vector data? The logic I want is this:
When I run the script, a search for the expression "Out" is done in the field named 'fa_to'. If it is found, the corresponding row value from 'fa_id' is then inserted into the newly created field named 'Bury_ID'. It does this until it has exhausted every row value of 'fa_to'.
My Code:
fn = 'C:/PLUGINS1/p3/now3.shp'
tile_layer_used = iface.addVectorLayer(fn, '', 'ogr')
fc = tile_layer_used.featureCount()
fa_to = tile_layer_used.fields().indexFromName('Tile_TO') # field has all integers and a row named 'Out'
fa_id = tile_layer_used.fields().indexFromName('Tile_ID') # field id with numbers 1 to 50
# fa_to tallies with fa_id
my_field_name = 'Bury_ID'
tileorder = {}
with edit(tile_layer_used):
    tile_layer_used.addAttribute(QgsField(my_field_name, QVariant.String, len=5, prec=0))
    tile_layer_used.updateFields()
    for f in tile_layer_used.getFeatures():
        if fa_to == "Out":
            tileorder = fa_id
        else:
            tileorder = fa_id
        f[my_field_name] = tileorder
        tile_layer_used.updateFeature(f)
My output creates the new field named 'Bury_ID', but it only holds a single number all the way through instead of the corresponding ids from 'fa_id'.
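As a hedged sketch only: the loop above compares the field index fa_to to "Out" rather than each feature's attribute value, and both branches assign the same thing. Comparing per-feature values, assuming the field names from the question, might look like this:
# Hedged sketch: compare the feature's own Tile_TO value, not the field index.
with edit(tile_layer_used):
    tile_layer_used.addAttribute(QgsField(my_field_name, QVariant.String, len=5, prec=0))
    tile_layer_used.updateFields()
    for f in tile_layer_used.getFeatures():
        if f['Tile_TO'] == 'Out':
            f[my_field_name] = str(f['Tile_ID'])  # copy the matching Tile_ID across
            tile_layer_used.updateFeature(f)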

Error while retrieving data from SQL Server using pyodbc python

My table data has 5 columns and 5288 rows. I am trying to write that data to a CSV file, adding column names. The code for that looks like this:
cursor = conn.cursor()
cursor.execute('Select * FROM classic.dbo.sample3')
rows = cursor.fetchall()
print ("The data has been fetched")
dataframe = pd.DataFrame(rows, columns =['s_name', 't_tid','b_id', 'name', 'summary'])
dataframe.to_csv('data.csv', index = None)
The data looks like this
s_sname  t_tid  b_id  name    summary
-------  -----  ----  ------  ---------------------------------------------
db1      001    100   careie  hello this is john speaking blah blah blah
It looks like above but has 5288 such rows.
When I try to execute my code mentioned above it throws an error saying :
ValueError: Shape of passed values is (5288, 1), indices imply (5288, 5)
I do not understand what I am doing wrong.
Use pandas.read_sql instead and let pandas build the DataFrame directly from the connection:
dataframe = pd.read_sql('Select * FROM classic.dbo.sample3',con=conn)
dataframe.to_csv('data.csv', index = None)
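If you want to keep the explicit cursor approach, a hedged sketch: converting the pyodbc Row objects to plain tuples before building the DataFrame sidesteps the shape mismatch.
# Sketch only: turn pyodbc Row objects into plain tuples first.
rows = [tuple(row) for row in cursor.fetchall()]
dataframe = pd.DataFrame(rows, columns=['s_name', 't_tid', 'b_id', 'name', 'summary'])
dataframe.to_csv('data.csv', index=None)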

Csv file to a Lua table and access the lines as new table or function()

Currently my code has simple tables containing the data needed for each object, like this:
infantry = {class = "army", type = "human", power = 2}
cavalry = {class = "panzer", type = "motorized", power = 12}
battleship = {class = "navy", type = "motorized", power = 256}
I use the table names as identifiers in various functions and have their values processed one by one, simply by calling a function that has access to the values.
Now I want to have this data stored in a spreadsheet (csv file) instead that looks something like this:
Name        class   type       power
Infantry    army    human      2
Cavalry     panzer  motorized  12
Battleship  navy    motorized  256
The spreadsheet will not have more than 50 lines and I want to be able to increase columns in the future.
I tried a couple of approaches from similar situations I found here, but due to my lacking skills I failed to access any values from the nested table. I think this is because I don't fully understand how the tables are structured after reading each line from the csv file into the table, and therefore fail to print any values at all.
If there is a way to get the name, class, type and power from the table and use that line just like my old simple tables, I would appreciate an educational example. Another approach could be to declare new tables from the csv that behave exactly like my old simple tables, line by line from the csv file. I don't know if this is doable.
Using Lua 5.1
You can read the csv file in as a string; I will use a multi-line string here to represent the csv.
gmatch with the pattern [^\n]+ will return each row of the csv.
gmatch with the pattern [^,]+ will return the value of each column from a given row.
If more rows or columns are added, or if the columns are moved around, the information is still converted reliably as long as the first row holds the headers.
The only column that cannot move is the first one, the Name column: if that is moved, it changes the key used to store each row in the table.
Using gmatch and these two patterns, [^,]+ and [^\n]+, you can separate the string into each row and column of the csv. Comments in the following code:
local csv = [[
Name,class,type,power
Infantry,army,human,2
Cavalry,panzer,motorized,12
Battleship,navy,motorized,256
]]
local items = {} -- Store our values here
local headers = {} -- Store the header names here
local first = true
for line in csv:gmatch("[^\n]+") do
    if first then -- this is to handle the first line and capture our headers.
        local count = 1
        for header in line:gmatch("[^,]+") do
            headers[count] = header
            count = count + 1
        end
        first = false -- set first to false to switch off the header block
    else
        local name
        local i = 2 -- We start at 2 because we won't be incrementing for the header
        for field in line:gmatch("[^,]+") do
            name = name or field -- check if we know the name of our row
            if items[name] then -- if the name is already in the items table then this is a field
                items[name][headers[i]] = field -- assign our value at the header in the table with the given name.
                i = i + 1
            else -- if the name is not in the table we create a new index for it
                items[name] = {}
            end
        end
    end
end
Here is how you can load a csv using the I/O library:
-- Example of how to load the csv.
path = "some\\path\\to\\file.csv"
local f = assert(io.open(path))
local csv = f:read("*all")
f:close()
Alternatively, you can use io.lines(path), which would take the place of csv:gmatch("[^\n]+") in the for loop as well.
Here is an example of using the resulting table:
-- print table out
print("items = {")
for name, item in pairs(items) do
    print("    " .. name .. " = {")
    for field, value in pairs(item) do
        print("        " .. field .. " = " .. value .. ",")
    end
    print("    },")
end
print("}")
The output:
items = {
    Infantry = {
        type = human,
        class = army,
        power = 2,
    },
    Battleship = {
        type = motorized,
        class = navy,
        power = 256,
    },
    Cavalry = {
        type = motorized,
        class = panzer,
        power = 12,
    },
}

Primary keys with Apache Spark

I have a JDBC connection between Apache Spark and PostgreSQL and I want to insert some data into my database. When I use append mode I need to specify an id for each DataFrame.Row. Is there any way for Spark to create primary keys?
Scala:
If all you need is unique numbers you can use zipWithUniqueId and recreate the DataFrame. First some imports and dummy data:
import sqlContext.implicits._
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, LongType}
val df = sc.parallelize(Seq(
  ("a", -1.0), ("b", -2.0), ("c", -3.0))).toDF("foo", "bar")
Extract schema for further usage:
val schema = df.schema
Add id field:
val rows = df.rdd.zipWithUniqueId.map{
  case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq)}
Create DataFrame:
val dfWithPK = sqlContext.createDataFrame(
  rows, StructType(StructField("id", LongType, false) +: schema.fields))
The same thing in Python:
from pyspark.sql import Row
from pyspark.sql.types import StructField, StructType, LongType
row = Row("foo", "bar")
df = sc.parallelize([row("a", -1.0), row("b", -2.0), row("c", -3.0)]).toDF()
row_with_index = Row(*["id"] + df.columns)
def make_row(columns):
    def _make_row(row, uid):
        row_dict = row.asDict()
        return row_with_index(*[uid] + [row_dict.get(c) for c in columns])
    return _make_row
f = make_row(df.columns)
df_with_pk = (df.rdd
              .zipWithUniqueId()
              .map(lambda x: f(*x))
              .toDF(StructType([StructField("id", LongType(), False)] + df.schema.fields)))
If you prefer consecutive numbers you can replace zipWithUniqueId with zipWithIndex, but it is a little bit more expensive.
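For example, reusing the f helper from above, the consecutive-id variant only changes the zip call:
# Same pipeline with zipWithIndex: consecutive ids at the cost of an extra Spark job.
df_with_ix = (df.rdd
              .zipWithIndex()
              .map(lambda x: f(*x))
              .toDF(StructType([StructField("id", LongType(), False)] + df.schema.fields)))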
Directly with DataFrame API:
(universal Scala, Python, Java, R with pretty much the same syntax)
Previously I had missed the monotonicallyIncreasingId function, which should work just fine as long as you don't require consecutive numbers:
import org.apache.spark.sql.functions.monotonicallyIncreasingId
df.withColumn("id", monotonicallyIncreasingId).show()
// +---+----+-----------+
// |foo| bar| id|
// +---+----+-----------+
// | a|-1.0|17179869184|
// | b|-2.0|42949672960|
// | c|-3.0|60129542144|
// +---+----+-----------+
While useful, monotonicallyIncreasingId is non-deterministic. Not only may the ids differ from execution to execution, but without additional tricks they cannot be used to identify rows when subsequent operations contain filters.
Note:
It is also possible to use rowNumber window function:
from pyspark.sql.window import Window
from pyspark.sql.functions import rowNumber
w = Window().orderBy()
df.withColumn("id", rowNumber().over(w)).show()
Unfortunately:
WARN Window: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
So unless you have a natural way to partition your data and ensure uniqueness, it is not particularly useful at this moment.
from pyspark.sql.functions import monotonically_increasing_id
df.withColumn("id", monotonically_increasing_id()).show()
Note that the second argument of df.withColumn is monotonically_increasing_id(), not monotonically_increasing_id.
I found the following solution to be relatively straightforward for the case where zipWithIndex() is the desired behavior, i.e. for those desiring consecutive integers.
In this case, we're using pyspark and relying on dictionary comprehension to map the original row object to a new dictionary which fits a new schema including the unique index.
from pyspark.sql.types import StructType, StructField, IntegerType

# read the initial dataframe without index
dfNoIndex = sqlContext.read.parquet(dataframePath)
# Need to zip together with a unique integer
# First create a new schema with uuid field appended
newSchema = StructType([StructField("uuid", IntegerType(), False)]
                       + dfNoIndex.schema.fields)
# zip with the index, map it to a dictionary which includes the new field
df = dfNoIndex.rdd.zipWithIndex()\
    .map(lambda pair: {k: v
                       for k, v
                       in list(pair[0].asDict().items()) + [("uuid", pair[1])]})\
    .toDF(newSchema)
For anyone else who doesn't require integer types, concatenating the values of several columns whose combinations are unique across the data can be a simple alternative. You have to handle nulls since concat/concat_ws won't do that for you. You can also hash the output if the concatenated values are long:
import pyspark.sql.functions as sf
unique_id_sub_cols = ["a", "b", "c"]
df = df.withColumn(
    "UniqueId",
    sf.md5(
        sf.concat_ws(
            "-",
            *[
                sf.when(sf.col(sub_col).isNull(), sf.lit("Missing")).otherwise(
                    sf.col(sub_col)
                )
                for sub_col in unique_id_sub_cols
            ]
        )
    ),
)
