Discord getting user roles from exact guild and removing a specific role if conditions match

I'm currently programming a Discord bot in Python that automatically removes a specific role called 'Onboarding' when a user's profile is updated by choosing one or more roles other than this one.
When I run the code I don't get any errors, but it still doesn't remove the 'Onboarding' role when a user picks one or more of the other roles.
Can someone please give advice?
This is the code:
import discord
from discord.ext import commands
from discord.ext.commands import Bot
import os
my_secret = os.environ['DISCORD_TOKEN']
from discord.ext import commands
from keep_alive import keep_alive
bot = discord.Client(intents=discord.Intents.default())
@bot.event
async def on_member_update(before, after):
    travel_role = discord.utils.get(after.guild.roles, name="🛫 | travel")
    onboarding_role = discord.utils.get(after.guild.roles, name="Onboarding")
    art_role = discord.utils.get(after.guild.roles, name="🎨 | art")
    awareness_role = discord.utils.get(after.guild.roles, name="🔮 | awareness")
    blogging_role = discord.utils.get(after.guild.roles, name="📰 | blogging-vlogging")
    books_role = discord.utils.get(after.guild.roles, name="📚 | books")
    business_role = discord.utils.get(after.guild.roles, name="💼 | business-career")
    comedy_role = discord.utils.get(after.guild.roles, name="😹 | comedy-entertainment")
    education_role = discord.utils.get(after.guild.roles, name="🎓 | education")
    fashion_role = discord.utils.get(after.guild.roles, name="💄 | fashion-beauty")
    family_role = discord.utils.get(after.guild.roles, name="🤗 | family-relationships")
    food_role = discord.utils.get(after.guild.roles, name="🍔 | food")
    gaming_role = discord.utils.get(after.guild.roles, name="🕹 | gaming")
    health_role = discord.utils.get(after.guild.roles, name="🏋️‍♀️ | health-fitness")
    it_role = discord.utils.get(after.guild.roles, name="🖥 | it-electronics")
    luxury_role = discord.utils.get(after.guild.roles, name="👸 | luxury-lifestyle")
    motorsports_role = discord.utils.get(after.guild.roles, name="🏎 | motorsports")
    movies_role = discord.utils.get(after.guild.roles, name="🍿 | movies-series")
    music_role = discord.utils.get(after.guild.roles, name="🎹 | music")
    poetry_role = discord.utils.get(after.guild.roles, name="🧚 | poetry")
    sustainability_role = discord.utils.get(after.guild.roles, name="🍃 | sustainability-environment")
    if onboarding_role in after.guild.roles and (art_role in after.guild.roles or awareness_role in after.guild.roles or blogging_role in after.guild.roles or books_role in after.guild.roles or business_role in after.guild.roles or comedy_role in after.guild.roles or education_role in after.guild.roles or fashion_role in after.guild.roles or family_role in after.guild.roles or food_role in after.guild.roles or gaming_role in after.guild.roles or health_role in after.guild.roles or it_role in after.guild.roles or luxury_role in after.guild.roles or motorsports_role in after.guild.roles or movies_role in after.guild.roles or music_role in after.guild.roles or poetry_role in after.guild.roles or sustainability_role in after.guild.roles or travel_role in after.guild.roles):
        await after.remove_roles(onboarding_role)
keep_alive()
bot.run(my_secret)
Changing after.guild.roles to after.roles --> no result.
Otherwise I'm clueless… :(

It seems to be an Intents problem.
You are setting the default intents: bot = discord.Client(intents=discord.Intents.default()), which don't include the members intent (needed for the on_member_update() event to be enabled).
The change should be as simple as:
intents = discord.Intents.default()
intents.members = True
bot = discord.Client(intents=intents)
Also, with your current code (if onboarding_role in after.guild.roles and ...) you are checking whether the role exists in the guild (Guild.roles), not whether the member has the role (Member.roles). You should change after.guild.roles to after.roles in the if statement (and likewise after the and).
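Putting both fixes together, here is a minimal sketch of the corrected bot. It is only a sketch: the long or-chain is collapsed into "the member has any role besides Onboarding and @everyone", which is an assumption about what the original condition is meant to express, and the members intent must also be enabled for the bot in the Discord Developer Portal.
import os

import discord

my_secret = os.environ['DISCORD_TOKEN']

# Enable the members intent so on_member_update() is actually dispatched
intents = discord.Intents.default()
intents.members = True
bot = discord.Client(intents=intents)

@bot.event
async def on_member_update(before, after):
    onboarding_role = discord.utils.get(after.guild.roles, name="Onboarding")
    # Check the member's own roles (after.roles), not the guild's role list (after.guild.roles)
    if onboarding_role is None or onboarding_role not in after.roles:
        return
    # Any role other than Onboarding and the default @everyone role counts as "picked a role"
    other_roles = [r for r in after.roles if r != onboarding_role and not r.is_default()]
    if other_roles:
        await after.remove_roles(onboarding_role)

bot.run(my_secret)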

Related

flink table API not processing records

I read JSON data from Kafka and tried to process the data with the Flink Table API.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql(
"create table inputTable(" +
"`src_ip` STRING," +
"`src_port` STRING," +
"`bytes_from_src` BIGINT," +
"`pkts_from_src` BIGINT," +
"`ts` TIMESTAMP(2) METADATA FROM 'timestamp'," +
"WATERMARK FOR ts AS ts" +
") WITH (" +
"'connector' = 'kafka'," +
"'topic' = 'test'," +
"'properties.bootstrap.servers' = 'localhost:9092'," +
"'properties.group.id' = 'testGroup'," +
"'scan.startup.mode' = 'earliest-offset'," +
"'format' = 'json'," +
"'json.fail-on-missing-field' = 'true'," +
"'json.ignore-parse-errors' = 'false'" +
")");
Table inputTable = tEnv.from("inputTable");
inputTable.printSchema();
inputTable.execute().print();
Table windowedTable = inputTable
.window(Tumble.over(lit(5).seconds()).on($("ts")).as("w"))
.groupBy($("w"), $("src_ip"))
.select($("w").start().as("window_start"),
$("src_ip"),
$("src_ip").count().as("src_ip_count"),
$("bytes_from_src").avg().as("bytes_from_src_mean")
);
windowedTable.execute().print();
There are 4 records in Kafka. The Flink program prints out the schema info and the inputTable as follows:
Connected to the target VM, address: '127.0.0.1:62348', transport: 'socket'
(
`src_ip` STRING,
`src_port` STRING,
`bytes_from_src` BIGINT,
`pkts_from_src` BIGINT,
`ts` TIMESTAMP(2) *ROWTIME* METADATA FROM 'timestamp',
WATERMARK FOR `ts`: TIMESTAMP(2) AS `ts`
)
+----+--------------------------------+--------------------------------+----------------------+----------------------+-------------------------+
| op | src_ip | src_port | bytes_from_src | pkts_from_src | ts |
+----+--------------------------------+--------------------------------+----------------------+----------------------+-------------------------+
| +I | 44.38.5.31 | 53159 | 120 | 3 | 2021-08-13 14:59:56.00 |
| +I | 44.38.132.51 | 39409 | 100 | 2 | 2021-08-13 14:58:11.00 |
| +I | 44.38.4.44 | 56758 | 336 | 6 | 2021-08-13 14:59:14.00 |
| +I | 44.38.5.34 | 40001 | 80 | 2 | 2021-08-13 14:57:04.00 |
After that, nothing is printed out. The program does not exit. I am running Flink within IDEA. At this point it seems like a black box: there is no output, and I do not know how to trace a Flink program.
If I comment out the line inputTable.execute().print();, the schema info is printed out, but nothing after that, and the program does not exit.
The Flink version used is 1.14.2.
I believe those records are being processed, and are being added to the window. But event time windows are triggered by watermarks, and the watermark isn't becoming large enough to trigger the window. To get this to work you need to process an event with a timestamp past the end of the window -- i.e., 2021-08-13 15:00:00.00 or larger.
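For example, producing one more record whose Kafka record timestamp lies past the window end will advance the watermark. A minimal sketch with kafka-python (an assumption — any producer that can set the record timestamp works, since ts is read from the 'timestamp' metadata):
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Same shape as the existing records; the field values are placeholders
record = {"src_ip": "44.38.9.99", "src_port": "40000", "bytes_from_src": 1, "pkts_from_src": 1}

# The Kafka record timestamp (epoch millis) is what drives `ts` and therefore the watermark.
# Anything at or after 2021-08-13 15:00:00 closes the currently open 5-second windows
# (assuming the timestamps shown are UTC; adjust if they are local time).
late_ts = datetime(2021, 8, 13, 15, 0, 1, tzinfo=timezone.utc)
producer.send("test", value=record, timestamp_ms=int(late_ts.timestamp() * 1000))
producer.flush()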
For debugging, the Flink web dashboard is helpful in situations like this. You can see if events are being processed, examine the watermarks, etc. See Flink webui when running from IDE for help in setting it up.

I tried to clone code from GitHub, did npm install and tried to run it, but I got some errors, I think in the node modules, and I don't understand why

So I tried to clone an AG-Grid project from GitHub.
Here are some of the errors that I got.
The code below is the output from the browser:
TypeError: rowData.forEach is not a function
ClientSideNodeManager.recursiveFunction
D:/A projet web/Ag-Grid proj glob/Github Template/4/agGrid-crudOperation/node_modules/ag-grid-community/dist/ag-grid-community.cjs.js:40217
40214 | return;
40215 | }
40216 | var rowNodes = [];
> 40217 | rowData.forEach(function (dataItem) {
| ^ 40218 | var node = _this.createNode(dataItem, parent, level);
40219 | rowNodes.push(node);
40220 | });
View compiled
ClientSideNodeManager.setRowData
D:/A projet web/Ag-Grid proj glob/Github Template/4/agGrid-crudOperation/node_modules/ag-grid-community/dist/ag-grid-community.cjs.js:40060
40057 | // we add rootNode as the parent, however if using ag-grid-enterprise, the grouping stage
40058 | // sets the parent node on each row (even if we are not grouping). so setting parent node
40059 | // here is for benefit of ag-grid-community users
> 40060 | var result = this.recursiveFunction(rowData, this.rootNode, ClientSideNodeManager.TOP_LEVEL);
| ^ 40061 | if (this.doingLegacyTreeData) {
40062 | this.rootNode.childrenAfterGroup = result;
40063 | this.setLeafChildren(this.rootNode);
Does anyone know how to fix this?

convert JSON text string to Pandas, but each row cell ends up as an array of values inside

I managed to extract a time series of prices from a web portal. The data arrives in a JSON format, and I convert it into a pandas DataFrame.
Unfortunately, the data for the different bands comes in a text string, and I can't seem to extract it properly.
The below is the json data I extract
I convert them into a pandas dataframe using this code
data = pd.DataFrame(r.json()['prices'])
and get them like this
I need to extract (for example) the data in the column ClosePrice out, so that I can do data analysis and cleansing on them.
I tried using
data['closePrice'].str.split(',', expand=True).rename(columns = lambda x: "string"+str(x+1))
but it doesn't really work.
Is there any way to either
a) when I convert the json to dataFrame, such that the prices within the closePrice, bidPrice etc are extracted in individual columns OR
b) if they were saved in the dataFrame, extract the text strings within them, such that I can extract the prices (e.g. the bid, ask and lastTraded) within the text string?
A relatively brute-force way, using links from other Stack Overflow answers:
# load and extract the json data
s = requests.Session()
r = s.post(url + '/session', json=data)
loc = <url>
dat1 = s.get(loc)
dat1 = pd.DataFrame(dat1.json()['prices'])
# convert the object list into individual columns
dat2 = pd.DataFrame()
dat2[['bidC','askC', 'lastP']] = pd.DataFrame(dat1.closePrice.values.tolist(), index= dat1.index)
dat2[['bidH','askH', 'lastH']] = pd.DataFrame(dat1.highPrice.values.tolist(), index= dat1.index)
dat2[['bidL','askL', 'lastL']] = pd.DataFrame(dat1.lowPrice.values.tolist(), index= dat1.index)
dat2[['bidO','askO', 'lastO']] = pd.DataFrame(dat1.openPrice.values.tolist(), index= dat1.index)
dat2['tStamp'] = pd.to_datetime(dat1.snapshotTime)
dat2['volume'] = dat1.lastTradedVolume
get the equivalent below
Use pandas.json_normalize to extract the data from the dict
import pandas as pd
data = r.json()
# print(data)
{'prices': [{'closePrice': {'ask': 1.16042, 'bid': 1.16027, 'lastTraded': None},
             'highPrice': {'ask': 1.16052, 'bid': 1.16041, 'lastTraded': None},
             'lastTradedVolume': 74,
             'lowPrice': {'ask': 1.16038, 'bid': 1.16026, 'lastTraded': None},
             'openPrice': {'ask': 1.16044, 'bid': 1.16038, 'lastTraded': None},
             'snapshotTime': '2018/09/28 21:49:00',
             'snapshotTimeUTC': '2018-09-28T20:49:00'}]}
df = pd.json_normalize(data['prices'])
Output:
| | lastTradedVolume | snapshotTime | snapshotTimeUTC | closePrice.ask | closePrice.bid | closePrice.lastTraded | highPrice.ask | highPrice.bid | highPrice.lastTraded | lowPrice.ask | lowPrice.bid | lowPrice.lastTraded | openPrice.ask | openPrice.bid | openPrice.lastTraded |
|---:|-------------------:|:--------------------|:--------------------|-----------------:|-----------------:|:------------------------|----------------:|----------------:|:-----------------------|---------------:|---------------:|:----------------------|----------------:|----------------:|:-----------------------|
| 0 | 74 | 2018/09/28 21:49:00 | 2018-09-28T20:49:00 | 1.16042 | 1.16027 | | 1.16052 | 1.16041 | | 1.16038 | 1.16026 | | 1.16044 | 1.16038 | |
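If the shorter column names from the brute-force version above are preferred, the flattened frame can simply be renamed afterwards. The sketch below mirrors the question's dat2 (the chosen names are otherwise arbitrary, and the lastTraded columns follow the same pattern):
df = pd.json_normalize(data["prices"])

# Rename the dotted columns produced by json_normalize
df = df.rename(columns={
    "closePrice.bid": "bidC", "closePrice.ask": "askC", "closePrice.lastTraded": "lastP",
    "highPrice.bid": "bidH", "highPrice.ask": "askH",
    "lowPrice.bid": "bidL", "lowPrice.ask": "askL",
    "openPrice.bid": "bidO", "openPrice.ask": "askO",
})

# snapshotTime looks like '2018/09/28 21:49:00'
df["tStamp"] = pd.to_datetime(df["snapshotTime"], format="%Y/%m/%d %H:%M:%S")
df["volume"] = df["lastTradedVolume"]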

How to sum values of a struct in a nested array in a Spark dataframe?

This is in Spark 2.1. Given this input file order.json:
{"id":1,"price":202.30,"userid":1}
{"id":2,"price":343.99,"userid":1}
{"id":3,"price":399.99,"userid":2}
And the following dataframes:
val order = sqlContext.read.json("order.json")
val df2 = order.select(struct("*") as 'order)
val df3 = df2.groupBy("order.userId").agg( collect_list( $"order").as("array"))
df3 has the following content:
+------+---------------------------+
|userId|array |
+------+---------------------------+
|1 |[[1,202.3,1], [2,343.99,1]]|
|2 |[[3,399.99,2]] |
+------+---------------------------+
and structure:
root
|-- userId: long (nullable = true)
|-- array: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: long (nullable = true)
| | |-- price: double (nullable = true)
| | |-- userid: long (nullable = true)
Now assuming I am given df3:
I would like to compute sum of array.price for each userId, taking advantage of having the array per userId rows.
I would add this computation as a new column in the resulting dataframe, like if I had done df3.withColumn("sum", lit(0)), but with lit(0) replaced by my computation.
I assumed it would be straightforward, but I am stuck on both. I didn't find any way to access the array as a whole to do the computation per row (with a foldLeft for example).
I would like to compute sum of array.price for each userId, taking advantage of having the array
Unfortunately having an array works against you here. Neither Spark SQL nor the DataFrame DSL provides tools that could be used directly to handle this task on an array of arbitrary size without decomposing (explode) it first.
You can use a UDF:
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{col, udf}
val totalPrice = udf((xs: Seq[Row]) => xs.map(_.getAs[Double]("price")).sum)
df3.withColumn("totalPrice", totalPrice($"array"))
+------+--------------------+----------+
|userId| array|totalPrice|
+------+--------------------+----------+
| 1|[[1,202.3,1], [2,...| 546.29|
| 2| [[3,399.99,2]]| 399.99|
+------+--------------------+----------+
or convert to statically typed Dataset:
df3
.as[(Long, Seq[(Long, Double, Long)])]
.map{ case (id, xs) => (id, xs, xs.map(_._2).sum) }
.toDF("userId", "array", "totalPrice").show
+------+--------------------+----------+
|userId| array|totalPrice|
+------+--------------------+----------+
| 1|[[1,202.3,1], [2,...| 546.29|
| 2| [[3,399.99,2]]| 399.99|
+------+--------------------+----------+
As mentioned above, you can decompose (explode) and aggregate:
import org.apache.spark.sql.functions.{sum, first}
df3
.withColumn("price", explode($"array.price"))
.groupBy($"userId")
.agg(sum($"price"), df3.columns.tail.map(c => first(c).alias(c)): _*)
+------+----------+--------------------+
|userId|sum(price)| array|
+------+----------+--------------------+
| 1| 546.29|[[1,202.3,1], [2,...|
| 2| 399.99| [[3,399.99,2]]|
+------+----------+--------------------+
but it is expensive and doesn't use the existing structure.
There is an ugly trick you could use:
import org.apache.spark.sql.functions.{coalesce, lit, max, size}
val totalPrice = (0 to df3.agg(max(size($"array"))).as[Int].first)
.map(i => coalesce($"array.price".getItem(i), lit(0.0)))
.foldLeft(lit(0.0))(_ + _)
df3.withColumn("totalPrice", totalPrice)
+------+--------------------+----------+
|userId| array|totalPrice|
+------+--------------------+----------+
| 1|[[1,202.3,1], [2,...| 546.29|
| 2| [[3,399.99,2]]| 399.99|
+------+--------------------+----------+
but it is more a curiosity than a real solution.
Spark 2.4.0 and above
You can now use the AGGREGATE functionality.
df3.createOrReplaceTempView("orders")
spark.sql(
"""
|SELECT
| *,
| AGGREGATE(`array`, 0.0, (accumulator, item) -> accumulator + item.price) AS totalPrice
|FROM
| orders
|""".stripMargin).show()

Spark explode nested JSON with Array in Scala

Let's say I loaded a JSON file into Spark 1.6 via
sqlContext.read.json("/hdfs/")
it gives me a Dataframe with following schema:
root
|-- id: array (nullable = true)
| |-- element: string (containsNull = true)
|-- checked: array (nullable = true)
| |-- element: string (containsNull = true)
|-- color: array (nullable = true)
| |-- element: string (containsNull = true)
|-- type: array (nullable = true)
| |-- element: string (containsNull = true)
The DF has only one row with an Array of all my Items inside.
+--------------------+--------------------+--------------------+
| id_e| checked_e| color_e|
+--------------------+--------------------+--------------------+
|[0218797c-77a6-45...|[false, true, tru...|[null, null, null...|
+--------------------+--------------------+--------------------+
I want to have a DF with the arrays exploded into one item per line.
+--------------------+-----+-------+
| id|color|checked|
+--------------------+-----+-------+
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
|0218797c-77a6-45f...| null| false|
...
So far I achieved this by creating a temporary table from the array DF and using SQL with lateral view explode along these lines:
val results = sqlContext.sql("""
  SELECT id, color, checked from temptable
  lateral view explode(checked_e) temptable as checked
  lateral view explode(id_e) temptable as id
  lateral view explode(color_e) temptable as color
""")
Is there any way to achieve this directly with DataFrame functions, without using SQL? I know there is something like df.explode(...) but I could not get it to work with my data.
EDIT: It seems explode isn't what I really wanted in the first place.
I want a new dataframe that has each item of the arrays line by line. The explode function actually gives back way more lines than my initial dataset has.
The following solution should work.
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._
val data = Seq((Seq(1,2,3),Seq(4,5,6),Seq(7,8,9)))
val df = sqlContext.createDataFrame(data)
val udf3 = udf[Seq[(Int, Int, Int)], Seq[Int], Seq[Int], Seq[Int]]{
case (a, b, c) => (a,b, c).zipped.toSeq
}
val df3 = df.select(udf3($"_1", $"_2", $"_3").alias("udf3"))
val exploded = df3.select(explode($"udf3").alias("col3"))
exploded.withColumn("first", $"col3".getItem("_1"))
.withColumn("second", $"col3".getItem("_2"))
.withColumn("third", $"col3".getItem("_3")).show
It is more straightforward, though, to use normal Scala code directly, and it might be more efficient too; Spark cannot help much anyway if there is only one row.
val data = Seq((Seq(1,2,3),Seq(4,5,6),Seq(7,8,9)))
val seqExploded = data.flatMap{
case (a: Seq[Int], b: Seq[Int], c: Seq[Int]) => (a, b, c).zipped.toSeq
}
val dfTheSame=sqlContext.createDataFrame(seqExploded)
dfTheSame.show
It should be simple like this:
df.withColumn("id", explode(col("id_e")))
.withColumn("checked", explode(col("checked_e")))
.withColumn("color", explode(col("color_e")))
