From d3ea4e340a4bb85510c0af83278740fba86a8e12 Mon Sep 17 00:00:00 2001 From: Floke Date: Thu, 22 Jan 2026 20:33:55 +0000 Subject: [PATCH] feat(Notion): Refactor product DBs for GTM strategy - Implements a 3-tier database architecture (Canonical Products, Portfolio, Companies) to separate product master data from company-specific portfolio information. - Upgrades import_competitive_radar.py to an intelligent "upsert" script that prevents duplicates by checking for existing entries before importing. - This enables detailed GTM strategy tracking for RoboPlanet products while monitoring competitor portfolios. - Updates documentation to reflect the new architecture and import process. --- MIGRATION_REPORT_COMPETITOR_ANALYSIS.md | 38 ++++ Notion_Dashboard.md | 65 ++++-- import_competitive_radar.py | 272 ++++++++++++++---------- 3 files changed, 238 insertions(+), 137 deletions(-) diff --git a/MIGRATION_REPORT_COMPETITOR_ANALYSIS.md b/MIGRATION_REPORT_COMPETITOR_ANALYSIS.md index ba516182..5cb5f835 100644 --- a/MIGRATION_REPORT_COMPETITOR_ANALYSIS.md +++ b/MIGRATION_REPORT_COMPETITOR_ANALYSIS.md @@ -148,6 +148,44 @@ Neben der reinen Analyse wurde das Fundament für ein dauerhaftes Monitoring-Sys 3. **Relationaler Import (v6):** * Der Notion-Import wurde auf **v6** aktualisiert. Er unterstützt nun lückenlos die neue v5-Struktur, importiert Chain-of-Thought Beschreibungen in Rich-Text-Felder und verknüpft erstmals auch die extrahierten **Referenzkunden** relational in Notion. + +### 📡 Post-Migration Architectural Refactoring (Jan 22, 2026): From Unified Import to GTM-Ready Architecture + +**Problem Statement:** The v6 import structure, while relational, suffered from a fundamental design flaw: the informational depth of a product entry was identical for all vendors. A product sold by RoboPlanet was captured with the same superficial data fields as a competitor's product. 
This prevented the mapping of our detailed Go-to-Market strategies and, because the LLM-generated "Purpose" texts varied slightly for identical products, it kept producing duplicates.
+
+**Strategic Decision: Separation of "What" from "How"**
+
+To create a true "Single Source of Truth" and to enrich our own products with the necessary depth (GTM strategy, support knowledge, etc.), we decided on a new architecture: the system is converted from a 2-tier model (Companies ↔ Products) to a 3-tier model.
+
+**The New 3-Tier Architecture:**
+
+1. **🆕 Canonical Products (The "WHAT" Database):**
+    * A completely new database that contains each product model (e.g., "Puma M20") **only once**.
+    * Content: objective master data (manufacturer, model, technical specs) and a relation to the `Product Categories` DB.
+    * This is the market-wide, vendor-neutral truth.
+
+2. **🔁 Portfolio (The "HOW" Database):**
+    * The existing `Competitive Radar (Products)` database is repurposed and renamed.
+    * It functions as a **junction table**: each entry represents the relationship between a vendor and a canonical product.
+    * Example entries:
+        * "RoboPlanet sells Puma M20"
+        * "TCO Robotics sells Puma M20"
+    * **Crucially:** this database holds the **context-specific information**. For RoboPlanet entries, the fields for the complete GTM strategy (target industries, pain points, battle cards, ROI logic, etc.) are stored here. For competitor entries, these fields remain empty.
+
+3. **🏢 Companies (The "WHO" Database):**
+    * Remains unchanged and contains the master data of the competitors and of RoboPlanet itself.
+
+**New Relational Link:**
+`[Canonical Products]` ↔️ `[Portfolio]` ↔️ `[Companies]`
+
+**Migration Plan:**
+A one-time Python script performs "Operation Clean Architecture":
+1. **Schema Transformation:** Creation of the `Canonical Products` DB, conversion of the old product DB into the `Portfolio` DB.
+2. **Intelligent Migration:** Reading of the old entries, creation of the unique entries in `Canonical Products`, and subsequent re-linking of the (now) `Portfolio` entries. +3. **Categorization:** Assignment of the canonical products to the global `Product Categories`. + +**Result:** This refactoring elevates the system from a pure market intelligence tool to a true **Strategic Marketing OS** that can directly map and support our own sales and marketing processes. + --- *Dokumentation finalisiert am 12.01.2026.* diff --git a/Notion_Dashboard.md b/Notion_Dashboard.md index ccc28e7b..f4172fd1 100644 --- a/Notion_Dashboard.md +++ b/Notion_Dashboard.md @@ -39,11 +39,12 @@ Die Schaltstelle für die hyper-personalisierte Ansprache. * **Logik:** Trennung in **Satz 1** (Individueller Hook basierend auf der aktuellen Website-Analyse des Zielkunden) und **Satz 2** (Relationaler Lösungsbaustein basierend auf Branche + Produkt). * **Voice-Ready:** Vorbereitung von Skripten für den zukünftigen Voice-KI-Einsatz im Vertrieb und Support. -### 3.4 Competitive Radar (Market Intelligence v2.0) -Automatisierte Überwachung der Marktbegleiter mit Fokus auf "Grounded Truth". -* **Funktion:** Kontinuierliches Scraping von Wettbewerber-Webseiten, gezielte Suche nach Referenzkunden und Case Studies. -* **Kill-Argumente & Landmines:** Erstellung von strukturierten Battlecards und spezifischen "Landmine Questions" für den Sales-Außendienst. -* **Relationaler Ansatz:** Trennung in vier verknüpfte Datenbanken (Firmen, Landmines, Referenzen, Produkte) für maximale Filterbarkeit und Übersicht. +### 3.4 Competitive Radar & GTM Engine (Market Intelligence v3.0) +Automatisierte Überwachung der Marktbegleiter *und* zentrale Steuerung der eigenen Go-to-Market-Strategien. 
+* **Architektur-Upgrade (Jan 2026):** Das System wurde auf eine 3-Tier-Architektur umgestellt, um zwischen dem **kanonischen Produkt** (was es ist) und dem **Portfolio-Eintrag** (wer es zu welchen Konditionen/mit welcher Strategie verkauft) zu unterscheiden. +* **Funktion (Market Intelligence):** Kontinuierliches Scraping von Wettbewerber-Webseiten zur Identifikation ihres Produkt-Portfolios, ihrer Referenzkunden und zur Erstellung von "Landmines" (Angriffsfragen). +* **Funktion (GTM Engine):** Für RoboPlanet-eigene Produkte dient das System als "Single Source of Truth". Es erfasst die komplette GTM-Strategie – von der technischen Analyse über die Definition der Zielbranchen (ICPs) und Schmerzpunkte bis hin zur Erstellung von Sales-Battlecards und ROI-Rechnern. +* **Relationaler Kern:** Das System besteht nun aus einem Netz von Datenbanken, dessen Herzstück die Triade `Canonical Products` ↔️ `Portfolio` ↔️ `Companies` ist. Dies ermöglicht es, ein Produkt (z.B. "Puma M20") einmal zentral zu definieren und dann spezifische Marketing- und Vertriebsstrategien für den Verkauf durch RoboPlanet anzuhängen, während gleichzeitig erfasst wird, welche Wettbewerber dasselbe Produkt führen. ### 3.5 Enrichment Factory & RevOps Datenanreicherung der CRM-Accounts. @@ -61,13 +62,22 @@ Datenanreicherung der CRM-Accounts. 3. **Positioning:** KI matcht Specs gegen die **Market Psychology DB** in Notion. 4. **Generation:** Erstellung von Website-Inhalten (WordPress API) und Sales-Battlecards. -### B. Der Outbound-Prozess (Whale Hunting) -1. **Scanning:** Enrichment-Tool liest Ziel-Accounts aus SuperOffice. -2. **Hyper-Personalization:** - * KI analysiert Kunden-Website -> Generiert **Satz 1** (Operative Herausforderung). - * System zieht **Satz 2** aus der **Messaging Matrix** in Notion. -3. **CRM-Injection:** Der finale Text wird via API in SuperOffice injiziert. -4. **Execution:** Vertrieb sendet hochgradig relevante Mails direkt aus dem gewohnten CRM. +### C. 
Der Market-Intelligence-Prozess (Inkrementeller Import) +Nach der Umstrukturierung der Datenbanken wurde der Importprozess von einem reinen "Einmal-Import" zu einem intelligenten, zustandsbewussten Synchronisierungs-Workflow weiterentwickelt. Dies ist der Schlüssel, um das System als lebendes "Market Radar"-Tool zu nutzen. + +1. **Trigger:** Manueller Start des `import_competitive_radar.py` Skripts mit einer neuen Analyse-JSON-Datei als Input. +2. **Phase 1: State Awareness (IST-Zustand lesen):** + * Bevor das Skript die neue Datei liest, fragt es den aktuellen Stand in Notion ab. + * Es erstellt drei "Caches" im Arbeitsspeicher: eine Liste aller existierenden Firmen, eine Liste aller `Canonical Products` und eine Liste aller `Portfolio`-Verknüpfungen (welche Firma verkauft welches Produkt). +3. **Phase 2: Abgleich & "Upsert" (SOLL-Zustand verarbeiten):** + * Das Skript liest die neue JSON-Datei Wettbewerber für Wettbewerber. + * **"Upsert" Logik:** Für jeden Eintrag (Firma, Produkt, Portfolio-Verknüpfung) prüft es gegen seine Caches, ob dieser bereits in Notion existiert. + * **Fall A (Existiert bereits):** Der Eintrag wird übersprungen. Es werden keine Änderungen vorgenommen. + * **Fall B (Existiert noch nicht):** Nur der fehlende Eintrag wird erstellt. Wenn z.B. das Produkt "BellaBot" schon existiert, aber die Firma "Robo-Heroes" neu ist, wird nur die neue Firma und die neue Portfolio-Verknüpfung ("Robo-Heroes verkauft BellaBot") angelegt. +4. **Ergebnis (Idempotenter Import):** + * **Keine Duplikate:** Firmen und kanonische Produkte werden niemals doppelt erstellt. + * **Inkrementelle Updates:** Nur neue Informationen werden hinzugefügt. Das System wächst mit jeder Analyse, anstatt überschrieben zu werden. + * **Sicherheit:** Das Skript kann beliebig oft mit derselben Datei ausgeführt werden. Nach dem ersten Lauf wird es keine Änderungen mehr vornehmen, da es erkennt, dass alle Daten bereits synchronisiert sind. 
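
Die oben beschriebene „Upsert"-Logik lässt sich schematisch skizzieren. Hinweis: Dies ist nur eine vereinfachte Illustration ohne echte Notion-API-Aufrufe; die Caches sind hier als Dictionary/Set modelliert, und `create_page_stub` ist ein rein illustrativer Platzhalter für den API-Call des Skripts:

```python
# Vereinfachte Skizze der idempotenten Upsert-Logik
# (Phase 1: Caches im Arbeitsspeicher, Phase 2/3: Abgleich & Anlage).
def upsert(companies, products, links, vendor, product, create_page_stub):
    if vendor not in companies:                      # Fall B: Firma fehlt
        companies[vendor] = create_page_stub("company", vendor)
    if product not in products:                      # Fall B: Produkt fehlt
        products[product] = create_page_stub("product", product)
    key = (companies[vendor], products[product])
    if key not in links:                             # Fall B: Portfolio-Link fehlt
        create_page_stub("portfolio", f"{vendor} - {product}")
        links.add(key)                               # Cache aktualisieren
    # Fall A (alles vorhanden): nichts zu tun -> idempotent

# Mini-Demo mit In-Memory-"Datenbank":
created = []
def stub(kind, name):
    created.append((kind, name))
    return f"id:{kind}:{name}"

companies, products, links = {}, {}, set()
upsert(companies, products, links, "Robo-Heroes", "BellaBot", stub)
upsert(companies, products, links, "Robo-Heroes", "BellaBot", stub)  # 2. Lauf: keine neuen Einträge
```

Genau nach diesem Muster arbeitet `import_competitive_radar.py` gegen die drei Notion-Datenbanken.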
--- @@ -84,19 +94,28 @@ Reduzierte Notion-Ansicht für Vertriebler vor Ort, die basierend auf dem Stando --- -## 6. Notion Datenbank-Relationen (Technisches Mapping) +## 6. Notion Datenbank-Relationen (Technisches Mapping - Architektur v3.0) -Um die relationale Integrität zu wahren, sind folgende Datenbanken in Notion zwingend zu verknüpfen: +Um die relationale Integrität zu wahren, sind folgende Datenbanken in Notion zwingend zu verknüpfen. Das Modell trennt zwischen anbieterneutralen Stammdaten (`Canonical Products`) und der anbieterspezifischen Vertriebssicht (`Portfolio`). -* **Product Master** $\leftrightarrow$ **Sector Master** (Welcher Roboter passt in welchen Markt?) -* **Messaging Matrix** $\leftrightarrow$ **Product Master** (Welche Lösung gehört zum Text?) -* **Messaging Matrix** $\leftrightarrow$ **Sector Master** (Welcher Schmerz gehört zu welcher Branche?) -* **Competitive Radar (Companies)** $\leftrightarrow$ **Competitive Radar (Landmines)** (Welche Angriffsfragen gehören zu welchem Wettbewerber?) -* **Competitive Radar (Companies)** $\leftrightarrow$ **Competitive Radar (References)** (Welche Kundenprojekte hat der Wettbewerber realisiert?) -* **Competitive Radar (Companies)** $\leftrightarrow$ **Competitive Radar (Products)** (Welche Produkte hat der Wettbewerber im Portfolio?) -* **The Brain** $\leftrightarrow$ **Product Master** (Welches Support-Wissen gehört zu welcher Hardware?) -* **GTM Workspace** $\leftrightarrow$ **Product Master** (Welche Kampagne bewirbt welches Gerät?) -* **Feature-to-Value Translator** $\leftrightarrow$ **Product Master** (Welcher Nutzen gehört zu welchem Feature?) 
+* **Kern-Relationen (Produkt & Markt):** + * **Canonical Products** ↔️ **Portfolio** (Ein kanonisches Produkt kann in mehreren Portfolios sein) + * **Companies** ↔️ **Portfolio** (Ein Unternehmen hat mehrere Produkte im Portfolio) + * **Canonical Products** ↔️ **Product Categories** (Jedes Produkt gehört zu einer Kategorie) + +* **GTM & Marketing-Relationen:** + * **Canonical Products** ↔️ **Sector & Persona Master** (Welcher Roboter passt in welchen Markt?) + * **Messaging Matrix** ↔️ **Canonical Products** (Welche Lösung gehört zum Text?) + * **Messaging Matrix** ↔️ **Sector & Persona Master** (Welcher Schmerz gehört zu welcher Branche?) + * **GTM Workspace** ↔️ **Canonical Products** (Welche Kampagne bewirbt welches Gerät?) + * **Feature-to-Value Translator** ↔️ **Canonical Products** (Welcher Nutzen gehört zu welchem Feature?) + +* **Competitive Intelligence-Relationen:** + * **Companies** ↔️ **Competitive Radar (Landmines)** (Welche Angriffsfragen gehören zu welchem Wettbewerber?) + * **Companies** ↔️ **Competitive Radar (References)** (Welche Kundenprojekte hat der Wettbewerber realisiert?) + +* **Wissensmanagement-Relationen:** + * **The Brain** ↔️ **Canonical Products** (Welches Support-Wissen gehört zu welcher Hardware?) 
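
Technisch wird die Triade dadurch abgebildet, dass jeder `Portfolio`-Eintrag zwei Relation-Properties trägt. Skizze des Property-Payloads, wie ihn auch das Import-Skript aufbaut (die IDs sind hier nur beispielhaft, kein echter API-Aufruf):

```python
# Skizze: Ein Portfolio-Eintrag verknüpft Firma und kanonisches Produkt
# über zwei Relation-Properties (Beispiel-IDs, rein illustrativ).
def build_portfolio_props(vendor_name, product_name, company_id, product_id):
    return {
        "Product": {"title": [{"text": {"content": f"{vendor_name} - {product_name}"}}]},
        "Related Competitor": {"relation": [{"id": company_id}]},
        "Canonical Product": {"relation": [{"id": product_id}]},
    }

props = build_portfolio_props("RoboPlanet", "Puma M20", "id-company-123", "id-product-456")
```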
 ---
diff --git a/import_competitive_radar.py b/import_competitive_radar.py
index d709d579..dbf3aa47 100644
--- a/import_competitive_radar.py
+++ b/import_competitive_radar.py
@@ -1,135 +1,179 @@
 import json
-import os
 import requests
 import sys
 
-# Configuration
+# --- CONFIGURATION ---
 JSON_FILE = 'analysis_robo-planet.de-4.json'
-TOKEN_FILE = 'notion_token.txt'
-PARENT_PAGE_ID = "2e088f42-8544-8024-8289-deb383da3818"
+NOTION_TOKEN = ""  # Will be loaded from file in main()
+HEADERS = {
+    "Authorization": f"Bearer {NOTION_TOKEN}",
+    "Content-Type": "application/json",
+    "Notion-Version": "2022-06-28",
+}
 
-# Database Titles
-DB_TITLE_HUB = "📦 Competitive Radar (Companies) v6"
-DB_TITLE_LANDMINES = "💣 Competitive Radar (Landmines) v6"
-DB_TITLE_REFS = "🏆 Competitive Radar (References) v6"
-DB_TITLE_PRODUCTS = "🤖 Competitive Radar (Products) v6"
+# --- DATABASE IDs ---
+COMPANIES_DB_ID = "2e688f42-8544-8158-8673-d8b1e3eca5b5"
+CANONICAL_PRODUCTS_DB_ID = "2f088f42-8544-81d5-bec7-d9189f3bacd4"
+PORTFOLIO_DB_ID = "2e688f42-8544-81df-8fcc-f1d7f8745e00"
+LANDMINES_DB_ID = ""  # Optional: Add if you want to re-import landmines
+REFERENCES_DB_ID = ""  # Optional: Add if you want to re-import references
 
-def load_json_data(filepath):
-    with open(filepath, 'r') as f:
-        return json.load(f)
+# --- API HELPERS ---
+def query_db(db_id, filter_payload=None):
+    """Retrieves all pages from a Notion database, with optional filter."""
+    url = f"https://api.notion.com/v1/databases/{db_id}/query"
+    all_pages = []
+    start_cursor = None
+
+    while True:
+        payload = {}
+        if start_cursor:
+            payload["start_cursor"] = start_cursor
+        if filter_payload:
+            payload["filter"] = filter_payload
+
+        response = requests.post(url, headers=HEADERS, json=payload)
+
+        if response.status_code != 200:
+            print(f"Error querying DB {db_id}: {response.status_code}")
+            print(response.json())
+            return None
+
+        data = response.json()
+        all_pages.extend(data["results"])
+
+        if data.get("has_more"):
+            start_cursor = data["next_cursor"]
+        else:
+            break
+
+    return all_pages
 
-def load_notion_token(filepath):
-    with open(filepath, 'r') as f:
-        return f.read().strip()
-
-def create_database(token, parent_id, title, properties):
-    url = "https://api.notion.com/v1/databases"
-    headers = {"Authorization": f"Bearer {token}", "Notion-Version": "2022-06-28", "Content-Type": "application/json"}
-    payload = {"parent": {"type": "page_id", "page_id": parent_id}, "title": [{"type": "text", "text": {"content": title}}], "properties": properties}
-    r = requests.post(url, headers=headers, json=payload)
-    if r.status_code != 200:
-        print(f"Error creating DB '{title}': {r.text}")
-        sys.exit(1)
-    return r.json()['id']
-
-def create_page(token, db_id, properties):
+def create_page(db_id, properties):
+    """Creates a new page in a Notion database."""
     url = "https://api.notion.com/v1/pages"
-    headers = {"Authorization": f"Bearer {token}", "Notion-Version": "2022-06-28", "Content-Type": "application/json"}
     payload = {"parent": {"database_id": db_id}, "properties": properties}
-    r = requests.post(url, headers=headers, json=payload)
-    if r.status_code != 200:
-        print(f"Error creating page: {r.text}")
-    return r.json().get('id')
+
+    response = requests.post(url, headers=HEADERS, json=payload)
+    if response.status_code == 200:
+        return response.json()
+    else:
+        print(f"Error creating page in DB {db_id}: {response.status_code}")
+        print(response.json())
+        return None
 
+# --- STATE AWARENESS HELPERS ---
+def get_existing_items_map(db_id, name_property="Name"):
+    """Fetches all items from a DB and returns a map of {name: id}."""
+    print(f"Fetching existing items from DB {db_id} to build cache...")
+    pages = query_db(db_id)
+    if pages is None:
+        sys.exit(f"Could not fetch items from DB {db_id}. Aborting.")
+
+    item_map = {}
+    for page in pages:
+        try:
+            item_name = page["properties"][name_property]["title"][0]["text"]["content"]
+            item_map[item_name] = page["id"]
+        except (KeyError, IndexError):
+            continue
+    print(f" - Found {len(item_map)} existing items.")
+    return item_map
+
+def get_existing_portfolio_links(db_id):
+    """Fetches all portfolio links and returns a set of (company_id, product_id) tuples."""
+    print(f"Fetching existing portfolio links from DB {db_id}...")
+    pages = query_db(db_id)
+    if pages is None:
+        sys.exit(f"Could not fetch portfolio links from DB {db_id}. Aborting.")
+
+    link_set = set()
+    for page in pages:
+        try:
+            company_id = page["properties"]["Related Competitor"]["relation"][0]["id"]
+            product_id = page["properties"]["Canonical Product"]["relation"][0]["id"]
+            link_set.add((company_id, product_id))
+        except (KeyError, IndexError):
+            continue
+    print(f" - Found {len(link_set)} existing portfolio links.")
+    return link_set
+
+# --- MAIN LOGIC ---
 def main():
-    token = load_notion_token(TOKEN_FILE)
-    data = load_json_data(JSON_FILE)
-    
-    print("🚀 Level 5 Import starting (v6 Databases)...")
-    
-    # 1. Create Databases
-    hub_id = create_database(token, PARENT_PAGE_ID, DB_TITLE_HUB, {
-        "Name": {"title": {}},
-        "Website": {"url": {}},
-        "Target Industries": {"multi_select": {}}
-    })
-    
-    lm_id = create_database(token, PARENT_PAGE_ID, DB_TITLE_LANDMINES, {
-        "Question": {"title": {}},
-        "Topic": {"select": {}},
-        "Related Competitor": {"relation": {"database_id": hub_id, "dual_property": {"synced_property_name": "Landmines"}}}
-    })
-    
-    prod_id = create_database(token, PARENT_PAGE_ID, DB_TITLE_PRODUCTS, {
-        "Product": {"title": {}},
-        "Category": {"select": {}},
-        "Purpose": {"rich_text": {}},
-        "Related Competitor": {"relation": {"database_id": hub_id, "dual_property": {"synced_property_name": "Products"}}}
-    })
+    global NOTION_TOKEN, HEADERS
+    try:
+        with open("notion_token.txt", "r") as f:
+            NOTION_TOKEN = f.read().strip()
+        HEADERS["Authorization"] = f"Bearer {NOTION_TOKEN}"
+    except FileNotFoundError:
+        print("Error: `notion_token.txt` not found.")
+        return
 
-    ref_id = create_database(token, PARENT_PAGE_ID, DB_TITLE_REFS, {
-        "Customer": {"title": {}},
-        "Industry": {"select": {}},
-        "Quote": {"rich_text": {}},
-        "Related Competitor": {"relation": {"database_id": hub_id, "dual_property": {"synced_property_name": "References"}}}
-    })
+    # --- Phase 1: State Awareness ---
+    print("\n--- Phase 1: Reading current state from Notion ---")
+    companies_map = get_existing_items_map(COMPANIES_DB_ID)
+    products_map = get_existing_items_map(CANONICAL_PRODUCTS_DB_ID)
+    portfolio_links = get_existing_portfolio_links(PORTFOLIO_DB_ID)
+
+    # --- Phase 2: Processing JSON ---
+    print("\n--- Phase 2: Processing local JSON file ---")
+    try:
+        with open(JSON_FILE, 'r') as f:
+            data = json.load(f)
+    except FileNotFoundError:
+        print(f"Error: `{JSON_FILE}` not found.")
+        return
 
-    # 2. Import Companies & Products
-    comp_map = {}
     for analysis in data.get('analyses', []):
-        c = analysis['competitor']
-        name = c['name']
+        competitor = analysis['competitor']
+        competitor_name = competitor['name']
+        print(f"\nProcessing competitor: {competitor_name}")
+
+        # --- Phase 3: "Upsert" Company ---
+        if competitor_name not in companies_map:
+            print(f" - Company '{competitor_name}' not found. Creating...")
+            props = {"Name": {"title": [{"text": {"content": competitor_name}}]}}
+            new_company = create_page(COMPANIES_DB_ID, props)
+            if new_company:
+                companies_map[competitor_name] = new_company["id"]
+            else:
+                print(f" - Failed to create company '{competitor_name}'. Skipping.")
+                continue
 
-        # v5: 'target_industries' is at root level of analysis object
-        industries = analysis.get('target_industries', [])
-        
-        props = {
-            "Name": {"title": [{"text": {"content": name}}]},
-            "Website": {"url": c['url'] or "https://google.com"},
-            "Target Industries": {"multi_select": [{"name": i[:100].replace(',', '')} for i in industries if i]}
-        }
-        pid = create_page(token, hub_id, props)
-        if pid:
-            comp_map[name] = pid
-            print(f" - Created Company: {name}")
+        company_id = companies_map[competitor_name]
+
+        # --- Phase 4: "Upsert" Products and Portfolio Links ---
+        for product in analysis.get('portfolio', []):
+            product_name = product['product']
+
+            # Upsert Canonical Product
+            if product_name not in products_map:
+                print(f" - Product '{product_name}' not found. Creating canonical product...")
+                props = {"Name": {"title": [{"text": {"content": product_name}}]}}
+                new_product = create_page(CANONICAL_PRODUCTS_DB_ID, props)
+                if new_product:
+                    products_map[product_name] = new_product["id"]
+                else:
+                    print(f" - Failed to create canonical product '{product_name}'. Skipping.")
+                    continue
 
-        for prod in analysis.get('portfolio', []):
-            p_props = {
-                "Product": {"title": [{"text": {"content": prod['product'][:100]}}]},
-                "Category": {"select": {"name": prod.get('category', 'Other')[:100]}},
-                "Purpose": {"rich_text": [{"text": {"content": prod.get('purpose', '')[:2000]}}]},
-                "Related Competitor": {"relation": [{"id": pid}]}
-            }
-            create_page(token, prod_id, p_props)
+            product_id = products_map[product_name]
+
+            # Check and create Portfolio Link
+            if (company_id, product_id) not in portfolio_links:
+                print(f" - Portfolio link for '{competitor_name}' -> '{product_name}' not found. Creating...")
+                portfolio_props = {
+                    "Product": {"title": [{"text": {"content": f"{competitor_name} - {product_name}"}}]},
+                    "Related Competitor": {"relation": [{"id": company_id}]},
+                    "Canonical Product": {"relation": [{"id": product_id}]}
+                }
+                new_portfolio_entry = create_page(PORTFOLIO_DB_ID, portfolio_props)
+                if new_portfolio_entry:
+                    portfolio_links.add((company_id, product_id))  # Add to cache to prevent re-creation in same run
+            else:
+                print(f" - Portfolio link for '{competitor_name}' -> '{product_name}' already exists. Skipping.")
 
-    # 3. Import Battlecards (Landmines)
-    for card in data.get('battlecards', []):
-        cid = comp_map.get(card['competitor_name'])
-        if not cid: continue
-        for q in card.get('landmine_questions', []):
-            # Handle both string and object formats from LLM
-            text = q['text'] if isinstance(q, dict) else q
-            cat = q.get('category', 'General') if isinstance(q, dict) else 'General'
-            
-            create_page(token, lm_id, {
-                "Question": {"title": [{"text": {"content": text[:100]}}]},
-                "Topic": {"select": {"name": cat}},
-                "Related Competitor": {"relation": [{"id": cid}]}
-            })
-
-    # 4. Import References
-    for ref_analysis in data.get('reference_analysis', []):
-        cid = comp_map.get(ref_analysis['competitor_name'])
-        if not cid: continue
-        for ref in ref_analysis.get('references', []):
-            create_page(token, ref_id, {
-                "Customer": {"title": [{"text": {"content": ref['name'][:100]}}]},
-                "Industry": {"select": {"name": ref.get('industry', 'Unknown')[:100].replace(',', '')}},
-                "Quote": {"rich_text": [{"text": {"content": ref.get('testimonial_snippet', '')[:2000]}}]},
-                "Related Competitor": {"relation": [{"id": cid}]}
-            })
-
-    print("✅ DONE")
+    print("\n--- ✅ Import script finished ---")
 
 if __name__ == "__main__":
-    main()
+    main()