Hello everyone. I'm writing a Discord bot; I get the values from Google Sheets, but they aren't displayed neatly. How can I align them so that each name lines up under the names and each number under the numbers?
Here's how it turns out https://i.stack.imgur.com/Gdquh.png
And it should be like this https://i.stack.imgur.com/UAPg2.png
spreadsheet_id = 'id'
result = service.spreadsheets().values().get(spreadsheetId=spreadsheet_id, range='A1:C15', majorDimension='ROWS').execute()
values = result.get('values', [])
embed = discord.Embed(description="\n".join([x[0] + " " + x[2] for x in values]))
result2 = service.spreadsheets().values().get(spreadsheetId=spreadsheet_id, range='A16:C28', majorDimension='ROWS').execute()
values2 = result2.get('values', [])
embed2 = discord.Embed(description="\n".join([x[0] + " " + x[2] for x in values2]))
I've been putting my data into pandas DataFrames, so I've been using to_markdown (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_markdown.html) and printing the result inside code blocks.
import pandas as pd
array = pd.DataFrame(values)
embed = discord.Embed(description='```' + array.to_markdown(index=False) + '```')
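If pulling in pandas (and the tabulate package that to_markdown requires) feels heavy for this, the columns can also be padded by hand with str.ljust/str.rjust and wrapped in a code block so Discord renders them in a monospace font. A minimal sketch, with made-up sample rows standing in for the Sheets API result:

```python
# Pad each cell to the width of its column so names and numbers line up.
# Sample rows standing in for the values returned by the Sheets API.
values = [["Alice", "x", "120"], ["Bob", "x", "7"], ["Charlotte", "x", "3150"]]

# Width of the name column (index 0) and the number column (index 2).
widths = [max(len(row[i]) for row in values) for i in (0, 2)]
lines = [row[0].ljust(widths[0]) + "  " + row[2].rjust(widths[1]) for row in values]
table = "\n".join(lines)

# Wrapping the result in ``` keeps Discord's proportional font from
# breaking the alignment.
description = "```\n" + table + "\n```"
```

The same `description` string can then be passed to `discord.Embed(description=...)` as in the snippets above.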
I'm trying to write the array to a single column, but although everything looks right to me, it keeps throwing an error.
Here's the piece of code:
function getGFTickersData() {
  var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("LIST OF STOCKS");
  var tickerRng = ss.getRange(2, 1, ss.getLastRow(), 1).getValues();
  //var TDSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("TickersData");
  var TDSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sheet10");
  var tickerArr = [];
  for (var b = 0; b < tickerRng.length; b++) {
    var tickerToArr = [tickerRng[b]];
    if (tickerToArr != '') {
      var gFinFormula = "=query(googlefinance(" + '"' + tickerToArr + '"' + ",'all shares'!A4,'all shares'!D3,'all shares'!D4,'all shares'!D5)," + '"' + "select *" + '"' + ",1)";
      var repeated = [].concat(...new Array(105).fill(tickerToArr));
      tickerArr.push(repeated);
    }
  }
  Logger.log(tickerArr[0]);
  TDSheet.getRange(TDSheet.getLastRow() + 1, 1, tickerArr.length, 1).setValues(tickerArr);
}
Appreciate any pointers!
From your follow-up reply:
The array is composed of about 200k elements, each appearing in the array 105 times. I want to write all of them, one on top of the other, in a single column.
How about the following modification?
From:
tickerArr.push(repeated)
To:
tickerArr = tickerArr.concat(repeated);
References:
push()
concat()
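The reason this matters: setValues expects tickerArr to be a flat list of one-element rows (e.g. [["AAPL"], ["AAPL"], ...]), but push() adds each repeated batch as a single nested element. In Python terms (an analogy for comparison, not Apps Script code), push() behaves like append() and concat() like extend():

```python
# The push-vs-concat difference, sketched with Python lists.
ticker = ["AAPL"]        # one spreadsheet row: a one-element list, like tickerRng[b]
repeated = [ticker] * 3  # three copies of the row (the original script uses 105)

rows = []
rows.append(repeated)    # push(): the whole batch becomes ONE nested element
assert len(rows) == 1    # wrong shape for setValues

rows = []
rows.extend(repeated)    # concat(): each row is spliced in individually
assert len(rows) == 3    # correct: three one-element rows
```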
I'm dynamically generating queries from 11 different tables in SQL Server and storing the results in CSV files in S3.
However, when I store null integer fields in the CSV they get converted to float, so my COPY command returns an error.
I really need to avoid that. Is there an option for that?
for object in table_list:
    if args.load_type == "full":
        query_load = object["query_full"]
    else:
        query_load = object["query_active"]

    df = pd.read_sql_query(query_load, sql_server_conn)
    df = df.replace(",", " ", regex=True)
    df = df.replace("\n", " ", regex=True)
    #print(df)
    #df = df * 1
    #print(df.dtypes)
    #print(df.info())
    df = df.assign(extraction_dttm=currentdate)

    csv_buffer = StringIO()
    df.to_csv(csv_buffer, index=False)

    folder_name = "{}".format(object["lake_table_name"])
    file_name = "{}_{}.csv".format(object["lake_table_name"], currentdate.strftime("%Y%m%d"))
    full_path_to_file = DATALAKE_PATH + "/" + folder_name + "/" + file_name
    # print("{} - Storing files in {} ... ".format(dt.utcnow(), datalake_bucket))
    s3_resource.Object(datalake_bucket, full_path_to_file).put(Body=csv_buffer.getvalue())
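The float conversion happens because NaN cannot live in a plain integer column, so pandas upcasts the whole column to float64. One commonly used remedy (not shown in the original post) is pandas' nullable Int64 dtype, which keeps whole numbers intact and writes empty cells for nulls. A minimal sketch, with a hypothetical column name:

```python
import pandas as pd
from io import StringIO

# A column mixing ints and NULLs comes back from read_sql_query as float64,
# so to_csv writes "1.0" instead of "1".
df = pd.DataFrame({"order_id": [1, 2, None]})
assert str(df["order_id"].dtype) == "float64"

# Casting to the nullable Int64 dtype keeps integers as integers and
# serializes missing values as empty fields.
df["order_id"] = df["order_id"].astype("Int64")

buf = StringIO()
df.to_csv(buf, index=False)
# buf now contains "1" and "2" with an empty cell for the null,
# rather than "1.0"/"2.0"
```

In the real export this cast would be applied per affected integer column after read_sql_query; which columns those are depends on the table schemas.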
So I'm trying to write a script to import text files, based on the file names and the tab names.
The txt files are game server log extracts, of 3 different types.
I've managed to get the script to import the text based on the file names (types), but I'm having issues running some simple formatting at the end of the loop before moving on to the next type of log.
Basically, the script imports all the text from the log files, parses it as CSV, and dumps it into the sheet. The sheet removes duplicates, splits the text into columns, etc.; then it should move on to the next type of log (and switch tabs) and repeat.
I'm still picking scripting up, so please be gentle...! Any help is greatly appreciated.
function ImportAll() {
  var folderID = "my google drive folder";
  var folder = DriveApp.getFolderById(folderID);
  var ss, sRow, lRow, fRow, csvData, logFile, file,
      sheet = SpreadsheetApp.getActive();
  var data = [];
  var logType = ["admin_", "kill_", "login_"];
  var allSheets = ['Admin', 'Kill', 'Login'];
  for (var k = 0; k < logType.length; k++) {
    var files = folder.searchFiles('title contains "' + logType[k] + '"');
    while (files.hasNext()) {
      file = files.next();
      data.push(file.getName());
    }
    // Must be allSheets, not allsheets – the lowercase name is undefined, so
    // getSheetByName(...) returned null and getDataRange() threw the TypeError.
    ss = sheet.getSheetByName(allSheets[k]);
    sRow = ss.getDataRange().getLastRow() + 1;
    //Browser.msgBox(data[0])
    for (var i = 0; i < data.length; i++) {
      logFile = DriveApp.getFilesByName(data[i]).next();
      csvData = Utilities.parseCsv(logFile.getBlob().getDataAsString());
      fRow = ss.getDataRange().getLastRow() + 1;
      ss.getRange(fRow, 1, csvData.length, csvData[0].length).setValues(csvData);
    }
    lRow = ss.getDataRange().getLastRow();
    ss.getRange('A' + sRow + ':A' + lRow).removeDuplicates().activate();
    ss.getRange('A' + sRow + ':A' + lRow).splitTextToColumns('-');
    ss.getRange('B' + sRow + ':B' + lRow).splitTextToColumns(':');
    ss.getRange('B:B').setNumberFormat('HH:mm:ss');
    ss.getFilter().remove();
    ss.getRange('A1:E' + lRow).createFilter();
    var rng = sheet.getRange('D2:D' + lRow);
    var rngV = rng.getValues();
    var str = "";
    for (var j = 0; j < rngV.length; j++) {
      str = rngV[j].toString().replace(s, ''); // note: 's' is undefined here too
      rngV[j] = str; // is working
    }
    rng.setValues(rngV); // NOT WORKING!!!!!!
    //sheet.appendRow(data); //throws [L]JavaLang#****
  }
}
I keep getting "TypeError: Cannot call method "getDataRange" of null." errors, and I've tried a heap of different things to no avail.
import urllib2
import pandas as pd
from bs4 import BeautifulSoup

x = 0
i = 1
data = []
while (i < 13):
    soup = BeautifulSoup(urllib2.urlopen(
        'http://games.espn.com/ffl/tools/projections?&slotCategoryId=4&scoringPeriodId=%d&seasonId=2018&startIndex=' % i, +str(x)).read(), 'html')
    tableStats = soup.find("table", ("class", "playerTableTable tableBody"))
    for row in tableStats.findAll('tr')[2:]:
        col = row.findAll('td')
        try:
            name = col[0].a.string.strip()
            opp = col[1].a.string.strip()
            rec = col[10].string.strip()
            yds = col[11].string.strip()
            dt = col[12].string.strip()
            pts = col[13].string.strip()
            data.append([name, opp, rec, yds, dt, pts])
        except Exception as e:
            pass
    df = pd.DataFrame(data=data, columns=[
        'PLAYER', 'OPP', 'REC', 'YDS', 'TD', 'PTS'])
    df
    i += 1
I have been working on a fantasy football program, and I am trying to increment the data over all weeks so I can create a dataframe of the top 40 players for each week.
I've been able to get any week of my choice by manually entering the week number in the PeriodId part of the URL, but I'm trying to increment it programmatically over each week to make things easier. I have tried using PeriodId='+ i +' and PeriodId=%d, but I keep getting various errors about concatenating str and int and bad operand types. Any suggestions or tips?
Try removing the comma between %i and str(x) to concatenate the strings and see if that helps.
soup = BeautifulSoup(urllib2.urlopen('http://games.espn.com/ffl/tools/projections?&slotCategoryId=4&scoringPeriodId=%d&seasonId=2018&startIndex='%i, +str(x)).read(), 'html')
should be:
soup = BeautifulSoup(urllib2.urlopen('http://games.espn.com/ffl/tools/projections?&slotCategoryId=4&scoringPeriodId=%d&seasonId=2018&startIndex='%i +str(x)).read(), 'html')
If you have problems concatenating or formatting the URL, assign it to a variable instead of writing it all on one line inside BeautifulSoup and urllib2.urlopen.
Use parentheses to format with multiple values, like "before %s is %s" % (1, 0):
url = 'http://games.espn.com/ffl/tools/projections?&slotCategoryId=4&scoringPeriodId=%s&seasonId=2018&startIndex=%s' % (i, x)
# or
#url = 'http://games.espn.com/ffl/tools/projections?&slotCategoryId=4&scoringPeriodId=%s&seasonId=2018&startIndex=0' % i
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')
Making the code shorter like this will not affect performance.
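Putting it together, the week loop then only needs to rebuild the URL each iteration. The sketch below reuses the question's URL parameters and shows just the string handling, without the network call:

```python
# Build one projections URL per scoring period: i is the week number,
# 0 stands in for the page offset (x in the question).
base = ('http://games.espn.com/ffl/tools/projections?'
        '&slotCategoryId=4&scoringPeriodId=%s&seasonId=2018&startIndex=%s')

urls = [base % (i, 0) for i in range(1, 13)]  # weeks 1 through 12
# Each URL can then be fetched with urllib2.urlopen(url) as in the question.
```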
I am trying to save Array[(String, Int)] data into a file. However, every time, it reports:
object not serializable
I also tried combining the two columns into a single string and writing it line by line, but it still reports the same error. The code is:
val fw = new PrintWriter(new File("/path/data_stream.txt"))
myArray.foreach(x => fw.write(x._1.toString + " " + x._2.toString + "\n"))
fw.close()
import java.nio.file._
val data = Array(("one", 1), ("two", 2), ("three", 3))
data.foreach(d => Files.write(Paths.get("/path/data_stream.txt"), (d._1 + " " + d._2 + "\n").getBytes, StandardOpenOption.CREATE, StandardOpenOption.APPEND))