Compare commits: 0db83994d0...a68ccaef20 (3 commits)

| SHA1 |
|---|
| a68ccaef20 |
| 96f117ddf9 |
| 02128a74ab |

KONVER_STRATEGY.md (new file, 81 lines)

@@ -0,0 +1,81 @@
# Konver.ai Integration: Strategy & Architecture

**Status:** Contract signed (focus: phone enrichment).
**Risk:** Loss of Dealfront (lead gen) without an adequate, automated replacement.
**Goal:** Use Konver.ai not merely as a manual "phone book" but as a **scalable source** for the lead factory (Company Explorer).

## 1. The Target Scenario (The "Golden Flow")

We integrate Konver.ai via its API directly into the Company Explorer. The CE acts as a gatekeeper to save credits and prevent duplicates.
```mermaid
flowchart TD
    subgraph "RoboPlanet Ecosystem"
        Notion[("Notion Strategy\n(Verticals/Pains)")]
        SO[("SuperOffice CRM\n(Existing base)")]
        CE["Company Explorer\n(The Brain)"]
    end

    subgraph "External Sources"
        Konver["Konver.ai API"]
        Web["Web / Google / Wiki"]
    end

    %% Data flow
    Notion -->|1. Sync strategy| CE
    SO -->|2. Import existing (blocklist)| CE

    CE -->|3. Search query + exclusion list| Konver
    %% Flowcharts have no "Note" statement, so notes are dotted helper nodes.
    KonverNote["Search: nursing homes > EUR 10M\nExclude: domain list from SO"]
    Konver -.- KonverNote

    Konver -->|4. Net-new candidates| CE

    CE -->|5. Deep dive (robotics check)| Web

    CE -->|6. Enrich contact (phone/mail)| Konver
    CENote["Only for companies with\na high robotics score!"]
    CE -.- CENote

    CE -->|7. Export qualified lead| SO
```
## 2. The Critical Gap: The "Exclusion List"

Since Dealfront (our previous "fishing net") is being shut down, we must use Konver for **new-customer generation**.
Without an **exclusion list** applied at search time, we burn money and time:

1. **Cost:** We pay credits for companies/contacts we already have.
2. **Data hygiene:** We import duplicates that then have to be cleaned up laboriously.
3. **Flying blind:** Before buying, we cannot tell whether a record is net new.

### Requirements for Konver (Technical Onboarding)

*"To use Konver.ai as the strategic successor to Dealfront in our marketing automation, we strictly require API features for **deduplication BEFORE the data purchase**."*

**Concrete features:**
* **Domain exclusion:** Upload a list (e.g. 5,000 domains) that the API search must *not* return.
* **Contact check:** A check (e.g. via hash comparison) whether an e-mail address is already "known" before contact details are revealed (and billed).
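Both asks can be sketched from the client side. This is a hypothetical sketch, not Konver.ai's actual API: the `query` and `exclude_domains` fields are placeholder names for a request body whose real shape is still unknown, while the hash function shows the general technique behind a privacy-friendly "already known?" contact check:

```python
import hashlib


def build_search_payload(query, exclude_domains):
    """Hypothetical request body: the exclusion list travels with the query,
    so known companies are filtered out BEFORE any credits are spent."""
    return {
        "query": query,
        # Normalized, deduplicated blocklist exported from SuperOffice.
        "exclude_domains": sorted({d.strip().lower() for d in exclude_domains}),
    }


def email_hash(email):
    """Normalized SHA-256 hash: both sides compare hashes instead of
    exchanging plain e-mail addresses."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


payload = build_search_payload(
    "Altenheime, Umsatz > 10 Mio",
    ["Pflege-Heim.de", "heim-a.de", "pflege-heim.de"],
)
print(payload["exclude_domains"])  # ['heim-a.de', 'pflege-heim.de']
```

Normalizing and deduplicating the domain list client-side keeps the upload small and makes the match behavior predictable regardless of how Konver compares domains.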
## 3. Workflow Variants

### A. The "Smart Enricher" (economical)
We use Konver only for companies that are **actually** relevant.

1. **Scraping:** The Company Explorer finds 100 nursing homes (web search).
2. **Filtering:** AI reviews the websites -> 40 of them are relevant (large floor areas).
3. **Enrichment:** Only for those 40 do we ask Konver via the API: *"Give me the facility manager + mobile number."*
4. **Result:** We pay 40 credits instead of 100. High efficiency.
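The gatekeeper step behind variant A fits in a few lines. A minimal sketch, assuming the robotics score comes out of the CE's deep-dive step; here it is faked as plain data, and the 0.7 threshold is an illustrative choice, not a tuned value:

```python
def select_for_enrichment(candidates, min_score=0.7):
    """Gatekeeper: only candidates whose robotics score clears the
    threshold are passed on to paid contact enrichment."""
    return [c for c in candidates if c["robotics_score"] >= min_score]


# Toy data standing in for the Company Explorer's web-research output.
candidates = [
    {"domain": "heim-a.de", "robotics_score": 0.9},
    {"domain": "heim-b.de", "robotics_score": 0.2},
    {"domain": "heim-c.de", "robotics_score": 0.8},
]

to_enrich = select_for_enrichment(candidates)
print(len(to_enrich), "of", len(candidates), "go to enrichment")  # 2 of 3
```

Every candidate dropped here is a credit not spent, which is exactly the economics the two variants below compare.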
### B. The "Mass Loader" (expensive & dumb; to be avoided)
1. We load "all nursing homes" from Konver straight into SuperOffice.
2. We pay 100 credits.
3. Sales calls them -> 60 turn out to be unsuitable (too small, no need).
4. **Result:** 60 credits burned, sales frustrated.
## 4. Conclusion & Next Steps

We need to clarify in the onboarding call:
1. **API docs:** Where is the documentation for the `Search` and `Enrich` endpoints?
2. **Exclusion:** How do we filter out existing customers in the API call?
3. **Bulk enrichment:** Can we upload lists (of domains) for enrichment?

Without these features, Konver is a step backwards into manual one-by-one processing.
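If bulk enrichment exists, the uploads will almost certainly be size-limited per request. A small batching helper keeps a domain export of any size within such a limit; the batch size of 500 is an assumption for illustration, not a documented Konver value:

```python
def chunked(items, size=500):
    """Yield consecutive batches of at most `size` items;
    500 is an assumed per-request limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


# 1,200 domains -> three requests of 500, 500 and 200 domains.
domains = [f"firma-{n}.de" for n in range(1200)]
print([len(b) for b in chunked(domains)])  # [500, 500, 200]
```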
scripts/append_task_updates.py (new file, 66 lines)

@@ -0,0 +1,66 @@
import os
import requests
from dotenv import load_dotenv

load_dotenv(dotenv_path="/home/node/clawd/.env")

NOTION_TOKEN = os.getenv("NOTION_API_KEY")
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Content-Type": "application/json",
    "Notion-Version": "2022-06-28"
}

# IDs from yesterday
TASKS = {
    "Pains Gains Vertical": "2ff88f42-8544-8050-8245-c3bb852058f4",
    "Segmentierung Bestand": "2ff88f42-8544-808f-862b-c30ab2f29783",
    "Matrixmultiplikation": "2ff88f42-8544-8079-a23e-c248e35b09a0"
}

UPDATES = {
    "Pains Gains Vertical": "Update 17.02.: ✅ Entwurf in Notion finalisiert und detailliert (inkl. Hygiene-Fokus). Bereit für Review am Freitag.",
    "Segmentierung Bestand": "Update 17.02.: ✅ Company Explorer Schema erweitert (V2). Bereit für Excel-Import.",
    "Matrixmultiplikation": "Update 17.02.: ✅ Logik '3+1' (Prio Produkt + Sekundär bei Ops-Rolle) in Datenstruktur abgebildet."
}


def append_block(page_id, text):
    """Append one bold paragraph block to the given Notion page."""
    url = f"https://api.notion.com/v1/blocks/{page_id}/children"
    payload = {
        "children": [
            {
                "object": "block",
                "type": "paragraph",
                "paragraph": {
                    "rich_text": [
                        {
                            "type": "text",
                            "text": {"content": text, "link": None},
                            "annotations": {
                                "bold": True,  # make the update stand out
                                "italic": False,
                                "strikethrough": False,
                                "underline": False,
                                "code": False,
                                "color": "default"
                            }
                        }
                    ]
                }
            }
        ]
    }
    resp = requests.patch(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print(f"✅ Appended to {page_id}")
    else:
        print(f"❌ Error {page_id}: {resp.text}")


if __name__ == "__main__":
    for name, page_id in TASKS.items():
        if name in UPDATES:
            append_block(page_id, UPDATES[name])
scripts/check_notion_tasks.py (new file, 69 lines)

@@ -0,0 +1,69 @@
import os
import requests
from dotenv import load_dotenv

load_dotenv(dotenv_path="/home/node/clawd/.env")

NOTION_TOKEN = os.getenv("NOTION_API_KEY")
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Content-Type": "application/json",
    "Notion-Version": "2022-06-28"
}

PROJECT_ID = "2ea88f42-8544-8074-9ad8-c24d283bc1c9"


def find_tasks_db():
    """Locate the 'Tasks' database via the Notion search endpoint."""
    url = "https://api.notion.com/v1/search"
    payload = {"query": "Tasks", "filter": {"value": "database", "property": "object"}}
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        results = resp.json().get("results", [])
        if results:
            return results[0]["id"]
    return None


def get_project_tasks(db_id):
    """Query tasks linked to the project via a relation property (usually 'Project')."""
    url = f"https://api.notion.com/v1/databases/{db_id}/query"
    payload = {
        "filter": {
            "property": "Project",
            "relation": {"contains": PROJECT_ID}
        }
    }
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code != 200:
        print(f"Error querying tasks: {resp.text}")
        return []
    return resp.json().get("results", [])


def print_tasks():
    db_id = find_tasks_db()
    if not db_id:
        print("❌ Tasks DB not found.")
        return

    print(f"--- Tasks for Project {PROJECT_ID} ---")
    for task in get_project_tasks(db_id):
        props = task["properties"]

        # The title property may be called "Name" or "Task" depending on the schema.
        name = "Unknown"
        if "Name" in props and props["Name"]["title"]:
            name = props["Name"]["title"][0]["plain_text"]
        elif "Task" in props and props["Task"]["title"]:
            name = props["Task"]["title"][0]["plain_text"]

        status = "Unknown"
        if "Status" in props and props["Status"]["status"]:
            status = props["Status"]["status"]["name"]

        print(f"- [{status}] {name} ({task['id']})")


if __name__ == "__main__":
    print_tasks()
scripts/post_daily_log_to_notion.py (new file, 114 lines)

@@ -0,0 +1,114 @@
import os
import requests
from dotenv import load_dotenv

load_dotenv(dotenv_path="/home/node/clawd/.env")

NOTION_TOKEN = os.getenv("NOTION_API_KEY")
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Content-Type": "application/json",
    "Notion-Version": "2022-06-28"
}

PROJECT_ID = "2ea88f42-8544-8074-9ad8-c24d283bc1c9"


def find_tasks_db():
    """Locate the 'Tasks' database via the Notion search endpoint."""
    url = "https://api.notion.com/v1/search"
    payload = {"query": "Tasks", "filter": {"value": "database", "property": "object"}}
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        results = resp.json().get("results", [])
        if results:
            return results[0]["id"]
    return None


def read_memory():
    try:
        with open("memory/2026-02-17.md", "r") as f:
            return f.readlines()
    except FileNotFoundError:
        return []


def parse_markdown_to_blocks(lines):
    """Translate simple Markdown (headings, bullets, paragraphs) into Notion blocks."""
    blocks = []
    for line in lines:
        line = line.strip()
        if not line:
            continue

        if line.startswith("# "):
            blocks.append({
                "object": "block",
                "type": "heading_1",
                "heading_1": {"rich_text": [{"type": "text", "text": {"content": line[2:]}}]}
            })
        elif line.startswith("## "):
            blocks.append({
                "object": "block",
                "type": "heading_2",
                "heading_2": {"rich_text": [{"type": "text", "text": {"content": line[3:]}}]}
            })
        elif line.startswith("### "):
            blocks.append({
                "object": "block",
                "type": "heading_3",
                "heading_3": {"rich_text": [{"type": "text", "text": {"content": line[4:]}}]}
            })
        elif line.startswith("- "):
            blocks.append({
                "object": "block",
                "type": "bulleted_list_item",
                "bulleted_list_item": {"rich_text": [{"type": "text", "text": {"content": line[2:]}}]}
            })
        else:
            blocks.append({
                "object": "block",
                "type": "paragraph",
                "paragraph": {"rich_text": [{"type": "text", "text": {"content": line}}]}
            })
    return blocks


def create_log_entry():
    db_id = find_tasks_db()
    if not db_id:
        print("❌ Tasks DB not found via search.")
        return

    lines = read_memory()
    children_blocks = parse_markdown_to_blocks(lines)

    url = "https://api.notion.com/v1/pages"

    # The title property is usually called "Name"; if the API rejects it,
    # we retry with "Task" below.
    payload = {
        "parent": {"database_id": db_id},
        "properties": {
            "Name": {"title": [{"text": {"content": "Tages-Log 17.02.2026"}}]},
            "Status": {"status": {"name": "Done"}},
            "Project": {"relation": [{"id": PROJECT_ID}]}
        },
        "children": children_blocks[:100]  # Notion accepts at most 100 children per request
    }

    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print("✅ Tages-Log in Notion erstellt.")
    elif "Name is not a property" in resp.text:
        # Fallback: the title property is called "Task" in this database.
        payload["properties"].pop("Name")
        payload["properties"]["Task"] = {"title": [{"text": {"content": "Tages-Log 17.02.2026"}}]}
        resp2 = requests.post(url, headers=HEADERS, json=payload)
        if resp2.status_code == 200:
            print("✅ Tages-Log in Notion erstellt (Property 'Task').")
        else:
            print(f"❌ Fehler (Retry): {resp2.text}")
    else:
        print(f"❌ Fehler: {resp.text}")


if __name__ == "__main__":
    create_log_entry()
scripts/post_retro_log.py (new file, 91 lines)

@@ -0,0 +1,91 @@
import os
import requests
from dotenv import load_dotenv

load_dotenv(dotenv_path="/home/node/clawd/.env")

NOTION_TOKEN = os.getenv("NOTION_API_KEY")
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Content-Type": "application/json",
    "Notion-Version": "2022-06-28"
}

PROJECT_ID = "2ea88f42-8544-8074-9ad8-c24d283bc1c9"


def find_tasks_db():
    """Locate the 'Tasks' database via the Notion search endpoint."""
    url = "https://api.notion.com/v1/search"
    payload = {"query": "Tasks", "filter": {"value": "database", "property": "object"}}
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        results = resp.json().get("results", [])
        if results:
            return results[0]["id"]
    return None


def create_log_entry():
    db_id = find_tasks_db()
    if not db_id:
        print("❌ Tasks DB not found.")
        return

    # Content for 16.02., posted verbatim as paragraph blocks.
    content = """# Tages-Log: 16.02.2026 (Nachtrag)

## Zusammenfassung
Durchbruch bei der technischen Integration zwischen SuperOffice CRM und Company Explorer. Der bidirektionale Datenaustausch steht.

## Erreichte Meilensteine

### 1. SuperOffice Integration (Deep Dive)
- **Status:** ✅ **POC Erfolgreich.**
- **Token-Management:** Automatische Refresh-Logik implementiert (kein manuelles Login mehr nötig).
- **Write-Back:** Erfolgreiches Update von Firmen-Daten (Adresse, VAT, URL) in SuperOffice.
- **Hürden genommen:**
  - **Pflichtfelder:** Fehler mit `Number2` (unbekanntes Pflichtfeld) identifiziert und umgangen.
  - **Listen-Objekte:** Korrekte Syntax für das Update von Dropdowns (Branche) gefunden (`Select` vs `Id`).

### 2. Company Explorer Connector
- **Status:** ✅ **Client fertig.**
- **Workflow:** Skript `company_explorer_connector.py` steuert jetzt den Upload von Firmen und das Abholen der Ergebnisse.

### 3. Regeln der Zusammenarbeit
- **Core Directive V2.0:** Fokus auf "Ehrlicher Partner" und präzise technische Umsetzung ohne Floskeln definiert.

## Fazit
Die "Rohre" zwischen den Systemen sind verlegt. Daten können fließen.
"""

    blocks = []
    for line in content.split('\n'):
        blocks.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"type": "text", "text": {"content": line}}]}
        })

    url = "https://api.notion.com/v1/pages"
    payload = {
        "parent": {"database_id": db_id},
        "properties": {
            "Name": {"title": [{"text": {"content": "Tages-Log 16.02.2026 (Nachtrag)"}}]},
            "Status": {"status": {"name": "Done"}},
            "Project": {"relation": [{"id": PROJECT_ID}]}
        },
        "children": blocks[:90]
    }

    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print("✅ Nachtrag 16.02. erstellt.")
    elif "Name is not a property" in resp.text:
        # Fallback: the title property is called "Task" in this database.
        payload["properties"].pop("Name")
        payload["properties"]["Task"] = {"title": [{"text": {"content": "Tages-Log 16.02.2026 (Nachtrag)"}}]}
        resp2 = requests.post(url, headers=HEADERS, json=payload)
        if resp2.status_code == 200:
            print("✅ Nachtrag 16.02. erstellt (Fallback).")
        else:
            print(f"❌ Fehler (Fallback): {resp2.text}")
    else:
        print(f"❌ Fehler: {resp.text}")


if __name__ == "__main__":
    create_log_entry()
scripts/update_notion_tasks_status.py (new file, 70 lines)

@@ -0,0 +1,70 @@
import os
import requests
from dotenv import load_dotenv

load_dotenv(dotenv_path="/home/node/clawd/.env")

NOTION_TOKEN = os.getenv("NOTION_API_KEY")
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Content-Type": "application/json",
    "Notion-Version": "2022-06-28"
}

# IDs from the previous fetch
TASKS = {
    "Setup GCP": "2ea88f42-8544-8073-b287-eb83ce581c0b",
    "SO API POC": "2ff88f42-8544-8093-a301-fc27b3886aa1",
    "Pains Gains Vertical": "2ff88f42-8544-8050-8245-c3bb852058f4",
    "Segmentierung Bestand": "2ff88f42-8544-808f-862b-c30ab2f29783",
    "Matrixmultiplikation": "2ff88f42-8544-8079-a23e-c248e35b09a0"
}

TASKS_DB_ID = "30588f42-8544-80c3-8919-e22d74d945ea"  # from discovery
PROJECT_ID = "2ea88f42-8544-8074-9ad8-c24d283bc1c9"


def update_status(page_id, status):
    url = f"https://api.notion.com/v1/pages/{page_id}"
    payload = {"properties": {"Status": {"status": {"name": status}}}}
    resp = requests.patch(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print(f"Updated Status {page_id} -> {status}")
    else:
        print(f"Error updating {page_id}: {resp.text}")


def add_comment(page_id, text):
    url = "https://api.notion.com/v1/comments"
    payload = {
        "parent": {"page_id": page_id},
        "rich_text": [{"text": {"content": text}}]
    }
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print(f"Added comment to {page_id}")
    else:
        print(f"Error commenting on {page_id}: {resp.text}")


def create_task(title):
    url = "https://api.notion.com/v1/pages"
    payload = {
        "parent": {"database_id": TASKS_DB_ID},
        "properties": {
            "Name": {"title": [{"text": {"content": title}}]},
            "Status": {"status": {"name": "To Do"}},
            "Project": {"relation": [{"id": PROJECT_ID}]}
        }
    }
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 200:
        print(f"Created Task: {title}")
    else:
        print(f"Error creating '{title}': {resp.text}")


def run():
    # 1. Mark finished tasks as Done
    update_status(TASKS["Setup GCP"], "Done")
    update_status(TASKS["SO API POC"], "Done")

    # 2. Progress comments
    add_comment(TASKS["Pains Gains Vertical"], "✅ Entwurf in Notion finalisiert und detailliert (inkl. Hygiene-Fokus). Bereit für Review am Freitag.")
    add_comment(TASKS["Segmentierung Bestand"], "✅ Company Explorer Schema erweitert (V2). Bereit für Excel-Import.")
    add_comment(TASKS["Matrixmultiplikation"], "✅ Logik '3+1' (Prio Produkt + Sekundär bei Ops-Rolle) in Datenstruktur abgebildet.")

    # 3. New tasks
    create_task("Company Explorer: Daten-Sync & CRM-Import")
    create_task("SuperOffice: Definition & Anlage UDF-Felder (Intro-Text)")


if __name__ == "__main__":
    run()
Binary file not shown.