4 Commits

Author SHA1 Message Date
088c665783 [31188f42] insert
insert
2026-02-24 12:18:49 +00:00
41920b6a84 feat: Implement unsubscribe link for marketing automation [31188f42]
This commit introduces a new unsubscribe feature to allow contacts to opt-out
from marketing automation.

Key changes include:
- Database schema migration: Added `unsubscribe_token` (UUID) to the `Contact` model.
- Data population: Implemented a script to assign unique tokens to existing contacts.
- API endpoint: Created a public GET `/unsubscribe/{token}` endpoint to handle opt-out requests.
- Automation: New contacts automatically receive an unsubscribe token upon creation.
- Integration: The full unsubscribe link is now returned via the provisioning API
  for storage in SuperOffice UDFs (ProgID: SuperOffice:9).
- Documentation: Updated  and
  to reflect the new feature and its integration requirements.
- Added  for quick overview and next steps.
2026-02-24 12:18:13 +00:00
0fd67ecc91 [31188f42] insert
insert
2026-02-24 08:40:38 +00:00
fa1ee24315 [2ff88f42] insert
insert
2026-02-24 07:13:49 +00:00
28 changed files with 799 additions and 77 deletions

View File

@@ -1 +1 @@
{"task_id": "31188f42-8544-80fa-8051-cef82ce7e4d3", "token": "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", "session_start_time": "2026-02-24T12:18:39.379752"}

View File

@@ -105,6 +105,17 @@ The system architecture has evolved from a CLI-based toolset to a modern web app
* **Problem:** Users didn't see when a background job finished.
* **Solution:** Implementing a polling mechanism (`setInterval`) tied to an `isProcessing` state is superior to static timeouts for long-running AI tasks.
7. **Hyper-Personalized Marketing Engine (v3.2) - "Deep Persona Injection":**
* **Problem:** Marketing texts were too generic and didn't reflect the specific psychological or operative profile of the different target roles (e.g., CFO vs. Facility Manager).
* **Solution (Deep Sync & Prompt Hardening):**
1. **Extended Schema:** Added `description`, `convincing_arguments`, and `kpis` to the `Persona` database model to store richer profile data.
2. **Notion Master Sync:** Updated the synchronization logic to pull these deep insights directly from the Notion "Personas / Roles" database.
3. **Role-Centric Prompts:** The `MarketingMatrix` generator was re-engineered to inject the persona's "Mindset" and "KPIs" into the prompt.
* **Example (Healthcare):**
- **Infrastructure Lead:** Now focuses on "IT Security", "DSGVO Compliance", and "WLAN integration".
- **Economic Buyer (CFO):** Focuses on "ROI Amortization", "Reduction of Overtime", and "Flexible Financing (RaaS)".
* **Verification:** Verified that the transition from a company-specific **Opener** (e.g., observing staff shortages at Klinikum Erding) to the **Role-specific Intro** (e.g., pitching transport robots to reduce walking distances for nursing directors) is seamless and logical.
## Metric Parser - Regression Tests
To ensure the stability and accuracy of the metric extraction logic, a dedicated test suite (`/company-explorer/backend/tests/test_metric_parser.py`) has been created. It covers the following critical, real-world bug fixes:

View File

@@ -180,7 +180,21 @@ Contacts have a 1:n relationship to Accounts. Accounts can have a "Primary Co
**Status (Marketing Automation):**
* *Manual:* Soft Denied, Bounced, Redirect, Interested, Hard denied.
* *Automatic:* Init, 1st Step, 2nd Step, Not replied, Unsubscribed.
### 6.1 Feature: Unsubscribe Functionality (v2.1 - Feb 2026)
**Concept:**
To enable GDPR-compliant marketing automation, a secure unsubscribe feature was implemented.
**Technical implementation:**
1. **Token:** Every contact in the `contacts` table receives a unique, unguessable `unsubscribe_token` (UUID).
2. **Link generation:** The Company Explorer generates a complete, personalized link (e.g. `https://<APP_BASE_URL>/unsubscribe/<token>`).
3. **API endpoint:** A public GET endpoint `/unsubscribe/{token}` accepts unsubscribe requests without authentication.
4. **Logic:**
* When the link is opened, the status of the associated contact is set to `"unsubscribed"`.
* The user receives a simple HTML confirmation page.
5. **CRM integration:** The generated link is returned via the provisioning API to the `connector-superoffice`, which writes it into a corresponding UDF in SuperOffice.
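The token and link mechanics described in the steps above can be sketched in a few lines; this is a minimal sketch mirroring the section (only UUID generation and the `APP_BASE_URL` concatenation are shown, helper names are illustrative):

```python
import uuid

def generate_unsubscribe_token() -> str:
    # One unique, unguessable token (UUID4) per contact.
    return str(uuid.uuid4())

def build_unsubscribe_link(app_base_url: str, token: str) -> str:
    # Personalized link of the form https://<APP_BASE_URL>/unsubscribe/<token>;
    # rstrip avoids a double slash when APP_BASE_URL ends with "/".
    return f"{app_base_url.rstrip('/')}/unsubscribe/{token}"
```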
## 7. History & Fixes (Jan 2026)
@@ -339,6 +353,19 @@ PERSÖNLICHE HERAUSFORDERUNGEN: {persona_pains}
**Concept:** Strict separation between `[Primary Product]` and `[Secondary Product]` to avoid logical breaks.
### 17.9 Deep Persona Injection (Update Feb 24, 2026)
**Goal:** Maximum relevance by incorporating psychographic and operational role details ("straight to the center").
**The extension:**
- **Full data sync:** Importing `Beschreibung/Denkweise`, `Was ihn überzeugt`, and `KPIs` from the Notion "Personas / Roles" database into the local schema.
- **Role-specific tonality:** The AI uses these details to hit the "tone" of the respective persona precisely (e.g. a technical focus for the infrastructure lead vs. a business focus for the CFO).
**Example cascade (Klinikum Erding):**
1. **Opener:** "Klinikum Erding contributes significantly to regional care... documenting seamless hygiene is an operational challenge."
2. **Matrix follow-up (Infrastructure):** "...minimize downtime by 80-90% through proactive monitoring... predictable maintenance and transparency through fixed **SLAs**." (Direct reference to the stored convincing arguments.)
3. **Matrix follow-up (Economic):** "...reduction of operational staffing costs by 10-25%... directly affects **ROI** and **payback period**." (Direct reference to the stored KPIs.)
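The injection itself boils down to assembling a role-profile block from the synced persona fields and placing it into the generation prompt. A sketch of that assembly (the German labels and the `'Nicht definiert'` fallback follow the generator code in this commit; the function name is illustrative):

```python
def build_persona_context(description, convincing_arguments, kpis) -> str:
    # Assembles the role profile injected into the MarketingMatrix prompt.
    # Each field falls back to 'Nicht definiert' ("not defined") when missing.
    return (
        f"BESCHREIBUNG/DENKWEISE: {description or 'Nicht definiert'}\n"
        f"WAS DIESE PERSON ÜBERZEUGT: {convincing_arguments or 'Nicht definiert'}\n"
        f"RELEVANTE KPIs: {kpis or 'Nicht definiert'}"
    )
```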
---
## 18. Next Steps & Todos (Post-Migration)

View File

@@ -63,6 +63,7 @@ The following fields should exist on the `Company` object (or `Contact` in SuperOffice termi
| `AI Summary` | Text (Long/Memo) | Short summary of the analysis |
| `AI Last Update` | Date | Timestamp of the last enrichment |
| `AI Status` | List/Select | Pending / Enriched / Error |
| `Unsubscribe-Link` | Text/Link | **NEW:** Stores the personalized link for unsubscribing from marketing automation. (ProgID: SuperOffice:9) |
### Required field in the Company Explorer
| Field name | Type | Purpose |

View File

@@ -0,0 +1,33 @@
# Completion of the "Unsubscribe Link" Feature
## Summary of the implementation
In this session, a complete, secure unsubscribe feature for the marketing automation in the `company-explorer` was implemented. This includes a database update with secure tokens, a public API endpoint for unsubscribing, and the integration into the SuperOffice provisioning process.
## Next technical steps for going live
To use the feature fully, the following steps are required in the **connector-superoffice** and the **infrastructure**:
1. **Configure `APP_BASE_URL`:**
* **What?** The environment variable `APP_BASE_URL` must be set in the configuration of the `company-explorer` (e.g. in an `.env` file or directly in the `docker-compose.yml`).
* **Why?** This URL is the public base address used to build the unsubscribe link (e.g. `APP_BASE_URL="https://www.ihre-domain.de"`).
* **Example (in `docker-compose.yml`):**
```yaml
services:
company-explorer:
# ...
environment:
- APP_BASE_URL=https://www.robo-planet.de
# ...
```
2. **Adapt the `connector-superoffice` worker:**
* **What?** The worker process in the `connector-superoffice` that receives the data from the `company-explorer` must be adapted. It must read the new `unsubscribe_link` field from the API response.
* **Why?** Currently the connector does not know this field and would ignore it.
* **Where?** In the file `connector-superoffice/worker.py` (or similar), in the function that processes the `/provision` response.
3. **Write the link into the SuperOffice UDF:**
* **What?** The logic in the `connector-superoffice` worker that writes data to SuperOffice must be extended. The extracted `unsubscribe_link` must be written into the text field you created with the ProgID `SuperOffice:9`.
* **Why?** Only then is the link stored in the CRM and usable in e-mail templates.
* **Where?** At the point where `SuperOfficeAPI.update_person` (or a similar function) is called with the UDF data.
After these three steps, the entire process works, from generating the link to storing it in the CRM and using it in e-mails.
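Worker-side, steps 2 and 3 reduce to a small mapping from the `/provision` response onto a UDF update. A sketch under stated assumptions: the function name and the flat `{ProgID: value}` payload shape are illustrative, not the actual connector code or SuperOffice API schema.

```python
def udf_update_from_provision(response: dict) -> dict:
    # Step 2: read the new unsubscribe_link field from the /provision response.
    # Step 3: map it onto the UDF with ProgID "SuperOffice:9".
    # Returns an empty dict when no link was provisioned, so callers can skip the write.
    link = response.get("unsubscribe_link")
    return {"SuperOffice:9": link} if link else {}
```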

check_erding_openers.py Normal file
View File

@@ -0,0 +1,16 @@
import sqlite3
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT name, ai_opener, ai_opener_secondary, industry_ai FROM companies WHERE name LIKE '%Erding%'")
row = cursor.fetchone()
if row:
print(f"Company: {row[0]}")
print(f"Industry: {row[3]}")
print(f"Opener Primary: {row[1]}")
print(f"Opener Secondary: {row[2]}")
else:
print("Company not found.")
conn.close()

check_klinikum_erding.py Normal file
View File

@@ -0,0 +1,16 @@
import sqlite3
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT name, ai_opener, ai_opener_secondary, industry_ai FROM companies WHERE name LIKE '%Klinikum Landkreis Erding%'")
row = cursor.fetchone()
if row:
print(f"Company: {row[0]}")
print(f"Industry: {row[3]}")
print(f"Opener Primary: {row[1]}")
print(f"Opener Secondary: {row[2]}")
else:
print("Company not found.")
conn.close()

check_matrix_indoor.py Normal file
View File

@@ -0,0 +1,23 @@
import sqlite3
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
query = """
SELECT i.name, p.name, m.subject, m.intro, m.social_proof
FROM marketing_matrix m
JOIN industries i ON m.industry_id = i.id
JOIN personas p ON m.persona_id = p.id
WHERE i.name = 'Leisure - Indoor Active'
"""
cursor.execute(query)
rows = cursor.fetchall()
for row in rows:
print(f"Industry: {row[0]} | Persona: {row[1]}")
print(f" Subject: {row[2]}")
print(f" Intro: {row[3]}")
print(f" Social Proof: {row[4]}")
print("-" * 50)
conn.close()

check_matrix_results.py Normal file
View File

@@ -0,0 +1,24 @@
import sqlite3
import json
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
query = """
SELECT i.name, p.name, m.subject, m.intro, m.social_proof
FROM marketing_matrix m
JOIN industries i ON m.industry_id = i.id
JOIN personas p ON m.persona_id = p.id
WHERE i.name = 'Healthcare - Hospital'
"""
cursor.execute(query)
rows = cursor.fetchall()
for row in rows:
print(f"Industry: {row[0]} | Persona: {row[1]}")
print(f" Subject: {row[2]}")
print(f" Intro: {row[3]}")
print(f" Social Proof: {row[4]}")
print("-" * 50)
conn.close()

View File

@@ -8,6 +8,7 @@ from pydantic import BaseModel
from datetime import datetime
import os
import sys
import uuid
from fastapi.security import HTTPBasic, HTTPBasicCredentials
import secrets
@@ -102,6 +103,7 @@ class ProvisioningResponse(BaseModel):
opener: Optional[str] = None # Primary opener (Infrastructure/Cleaning)
opener_secondary: Optional[str] = None # Secondary opener (Service/Logistics)
texts: Dict[str, Optional[str]] = {}
unsubscribe_link: Optional[str] = None
# Enrichment Data for Write-Back
address_city: Optional[str] = None
@@ -205,7 +207,69 @@ def on_startup():
except Exception as e:
logger.critical(f"Database init failed: {e}", exc_info=True)
# --- Public Routes (No Auth) ---
from fastapi.responses import HTMLResponse
@app.get("/unsubscribe/{token}", response_class=HTMLResponse)
def unsubscribe_contact(token: str, db: Session = Depends(get_db)):
contact = db.query(Contact).filter(Contact.unsubscribe_token == token).first()
success_html = """
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<title>Abmeldung erfolgreich</title>
<style>
body { font-family: sans-serif; text-align: center; padding: 40px; }
h1 { color: #333; }
</style>
</head>
<body>
<h1>Sie wurden erfolgreich abgemeldet.</h1>
<p>Sie werden keine weiteren Marketing-E-Mails von uns erhalten.</p>
</body>
</html>
"""
error_html = """
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<title>Fehler bei der Abmeldung</title>
<style>
body { font-family: sans-serif; text-align: center; padding: 40px; }
h1 { color: #d9534f; }
</style>
</head>
<body>
<h1>Abmeldung fehlgeschlagen.</h1>
<p>Der von Ihnen verwendete Link ist ungültig oder abgelaufen. Bitte kontaktieren Sie uns bei Problemen direkt.</p>
</body>
</html>
"""
if not contact:
logger.warning(f"Unsubscribe attempt with invalid token: {token}")
return HTMLResponse(content=error_html, status_code=404)
if contact.status == "unsubscribed":
logger.info(f"Contact {contact.id} already unsubscribed, showing success page anyway.")
return HTMLResponse(content=success_html, status_code=200)
contact.status = "unsubscribed"
contact.updated_at = datetime.utcnow()
db.commit()
logger.info(f"Contact {contact.id} ({contact.email}) unsubscribed successfully via token.")
# Here you would trigger the sync back to SuperOffice in a background task
# background_tasks.add_task(sync_unsubscribe_to_superoffice, contact.id)
return HTMLResponse(content=success_html, status_code=200)
# --- API Routes ---
@app.get("/api/health")
def health_check(username: str = Depends(authenticate_user)):
@@ -328,7 +392,8 @@ def provision_superoffice_contact(
company_id=company.id,
so_contact_id=req.so_contact_id,
so_person_id=req.so_person_id,
status="ACTIVE",
unsubscribe_token=str(uuid.uuid4())
)
db.add(person)
logger.info(f"Created new person {req.so_person_id} for company {company.name}")
@@ -376,6 +441,11 @@ def provision_superoffice_contact(
texts["intro"] = matrix_entry.intro
texts["social_proof"] = matrix_entry.social_proof
# 6. Construct Unsubscribe Link
unsubscribe_link = None
if person and person.unsubscribe_token:
unsubscribe_link = f"{settings.APP_BASE_URL.rstrip('/')}/unsubscribe/{person.unsubscribe_token}"
return ProvisioningResponse(
status="success",
company_name=company.name,
@@ -385,6 +455,7 @@ def provision_superoffice_contact(
opener=company.ai_opener,
opener_secondary=company.ai_opener_secondary,
texts=texts,
unsubscribe_link=unsubscribe_link,
address_city=company.city,
address_street=company.street,
address_zip=company.zip_code,
@@ -1252,7 +1323,6 @@ def run_batch_classification_task():
# --- Serve Frontend ---
static_path = "/frontend_static"
if not os.path.exists(static_path):
static_path = os.path.join(os.path.dirname(__file__), "../../frontend/dist")
if not os.path.exists(static_path):
static_path = os.path.join(os.path.dirname(__file__), "../static")
@@ -1260,11 +1330,34 @@ if not os.path.exists(static_path):
logger.info(f"Static files path: {static_path} (Exists: {os.path.exists(static_path)})")
if os.path.exists(static_path):
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
index_file = os.path.join(static_path, "index.html")
# Mount assets specifically first
assets_path = os.path.join(static_path, "assets")
if os.path.exists(assets_path):
app.mount("/assets", StaticFiles(directory=assets_path), name="assets")
@app.get("/")
async def serve_index():
return FileResponse(index_file)
# Catch-all for SPA routing (any path not matched by API or assets)
@app.get("/{full_path:path}")
async def spa_fallback(full_path: str):
# Allow API calls to fail naturally with 404
if full_path.startswith("api/"):
raise HTTPException(status_code=404)
# If it's a file that exists, serve it (e.g. favicon, robots.txt)
file_path = os.path.join(static_path, full_path)
if os.path.isfile(file_path):
return FileResponse(file_path)
# Otherwise, serve index.html for SPA routing
return FileResponse(index_file)
else:
@app.get("/")
def root_no_frontend():

View File

@@ -24,6 +24,9 @@ try:
# Paths
LOG_DIR: str = "/app/Log_from_docker"
# Public URL
APP_BASE_URL: str = "http://localhost:8090"
class Config:
env_file = ".env"
extra = 'ignore'

View File

@@ -107,6 +107,9 @@ class Contact(Base):
role = Column(String) # Operativer Entscheider, etc.
status = Column(String, default="") # Marketing Status
# New field for unsubscribe functionality
unsubscribe_token = Column(String, unique=True, index=True, nullable=True)
is_primary = Column(Boolean, default=False)
created_at = Column(DateTime, default=datetime.utcnow)
@@ -205,8 +208,12 @@ class Persona(Base):
id = Column(Integer, primary_key=True, index=True)
name = Column(String, unique=True, index=True) # Matches the 'role' string in JobRolePattern
description = Column(Text, nullable=True) # NEW: Role description / how they think
pains = Column(Text, nullable=True) # JSON list or multiline string
gains = Column(Text, nullable=True) # JSON list or multiline string
convincing_arguments = Column(Text, nullable=True) # NEW: What convinces them
typical_positions = Column(Text, nullable=True) # NEW: Typical titles
kpis = Column(Text, nullable=True) # NEW: Relevant KPIs
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

View File

@@ -0,0 +1,44 @@
import uuid
import os
import sys
# This is the crucial part to fix the import error.
# We add the 'company-explorer' directory to the path, so imports can be absolute
# from the 'backend' module.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from backend.database import Contact, SessionLocal
def migrate_existing_contacts():
"""
Generates and adds an unsubscribe_token for all existing contacts
that do not have one yet.
"""
db = SessionLocal()
try:
contacts_to_update = db.query(Contact).filter(Contact.unsubscribe_token == None).all()
if not contacts_to_update:
print("All contacts already have an unsubscribe token. No migration needed.")
return
print(f"Found {len(contacts_to_update)} contacts without an unsubscribe token. Generating tokens...")
for contact in contacts_to_update:
token = str(uuid.uuid4())
contact.unsubscribe_token = token
print(f" - Generated token for contact ID {contact.id} ({contact.email})")
db.commit()
print("\nSuccessfully updated all contacts with new unsubscribe tokens.")
except Exception as e:
print(f"An error occurred: {e}")
db.rollback()
finally:
db.close()
if __name__ == "__main__":
print("Starting migration: Populating unsubscribe_token for existing contacts.")
migrate_existing_contacts()
print("Migration finished.")

View File

@@ -93,9 +93,16 @@ def generate_prompt(industry: Industry, persona: Persona) -> str:
persona_pains = [persona.pains] if persona.pains else []
persona_gains = [persona.gains] if persona.gains else []
# Advanced Persona Context
persona_context = f"""
BESCHREIBUNG/DENKWEISE: {persona.description or 'Nicht definiert'}
WAS DIESE PERSON ÜBERZEUGT: {persona.convincing_arguments or 'Nicht definiert'}
RELEVANTE KPIs: {persona.kpis or 'Nicht definiert'}
"""
prompt = f"""
Du bist ein kompetenter Lösungsberater und brillanter Texter für B2B-Marketing.
AUFGABE: Erstelle 3 hoch-personalisierte Textblöcke (Subject, Introduction_Textonly, Industry_References_Textonly) für eine E-Mail an einen Entscheider.
--- KONTEXT ---
ZIELBRANCHE: {industry.name}
@@ -106,20 +113,27 @@ FOKUS-PRODUKT (LÖSUNG):
{product_context}
ANSPRECHPARTNER (ROLLE): {persona.name}
{persona_context}
SPEZIFISCHE HERAUSFORDERUNGEN (PAIN POINTS) DER ROLLE:
{chr(10).join(['- ' + str(p) for p in persona_pains])}
SPEZIFISCHE NUTZEN (GAINS) DER ROLLE:
{chr(10).join(['- ' + str(g) for g in persona_gains])}
--- DEINE AUFGABE ---
Deine Texte müssen "voll ins Zentrum" der Rolle treffen. Vermeide oberflächliche Floskeln. Nutze die Details zur Denkweise, den KPIs und den Überzeugungsargumenten, um eine tiefgreifende Relevanz zu erzeugen.
1. **Subject:** Formuliere eine kurze Betreffzeile (max. 6 Wörter). Richte sie **direkt an einem der persönlichen Pain Points** des Ansprechpartners oder dem zentralen Branchen-Pain. Sei scharfsinnig, nicht werblich.
2. **Introduction_Textonly:** Formuliere einen prägnanten Einleitungstext (max. 2 Sätze).
- **WICHTIG:** Gehe davon aus, dass die spezifische Herausforderung des Kunden bereits im Satz davor [Opener] genannt wurde. **Wiederhole die Herausforderung NICHT.**
- **Satz 1 (Die Lösung & der Gain):** Beginne direkt mit der Lösung. Nenne die im Kontext `FOKUS-PRODUKT` definierte **Produktkategorie** (z.B. "automatisierte Reinigungsroboter") und verbinde sie mit einem Nutzen, der für diese Rolle (siehe `WAS DIESE PERSON ÜBERZEUGT` und `GAINS`) besonders kritisch ist.
- **Satz 2 (Die Relevanz):** Stelle die Relevanz für die Zielperson her, indem du eine ihrer `PERSÖNLICHE HERAUSFORDERUNGEN` oder `KPIs` adressierst. Beispiel: "Für Sie als [Rolle] bedeutet dies vor allem [Nutzen bezogen auf KPI oder Pain]."
3. **Industry_References_Textonly:** Formuliere einen **strategischen Referenz-Block (ca. 2-3 Sätze)** nach folgendem Muster:
- **Satz 1 (Social Proof):** Beginne direkt mit dem Nutzen, den vergleichbare Unternehmen in der Branche {industry.name} bereits erzielen. (Erfinde keine Firmennamen, sprich von "Führenden Einrichtungen" oder "Vergleichbaren Häusern").
- **Satz 2 (Rollen-Relevanz):** Schaffe den direkten Nutzen für die Zielperson. Nutze dabei die Informationen aus `BESCHREIBUNG/DENKWEISE`, um den Ton perfekt zu treffen.
--- BEISPIEL FÜR EINEN PERFEKTEN OUTPUT ---
{{

View File

@@ -89,6 +89,17 @@ def migrate_tables():
""")
logger.info("Table 'reported_mistakes' ensured to exist.")
# 4. Update CONTACTS Table (Two-step for SQLite compatibility)
logger.info("Checking 'contacts' table schema for unsubscribe_token...")
contacts_columns = get_table_columns(cursor, "contacts")
if 'unsubscribe_token' not in contacts_columns:
logger.info("Adding column 'unsubscribe_token' to 'contacts' table...")
cursor.execute("ALTER TABLE contacts ADD COLUMN unsubscribe_token TEXT")
logger.info("Creating UNIQUE index on 'unsubscribe_token' column...")
cursor.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_contacts_unsubscribe_token ON contacts (unsubscribe_token)")
conn.commit()
logger.info("All migrations completed successfully.")
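The two-step pattern in this hunk (a plain `ALTER TABLE`, then a separate unique index) exists because SQLite's `ALTER TABLE ... ADD COLUMN` cannot attach a `UNIQUE` constraint. A self-contained sketch of the same migration against an in-memory database (table layout simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT)")

# Step 1: add the column without constraints (the only form ALTER TABLE supports).
cols = [row[1] for row in cur.execute("PRAGMA table_info(contacts)")]
if "unsubscribe_token" not in cols:
    cur.execute("ALTER TABLE contacts ADD COLUMN unsubscribe_token TEXT")
    # Step 2: enforce uniqueness through a separate unique index.
    cur.execute(
        "CREATE UNIQUE INDEX IF NOT EXISTS idx_contacts_unsubscribe_token "
        "ON contacts (unsubscribe_token)"
    )
conn.commit()
```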

View File

@@ -16,13 +16,14 @@ logger = logging.getLogger(__name__)
NOTION_TOKEN_FILE = "/app/notion_token.txt"
# Sector & Persona Master DB
PERSONAS_DB_ID = "30588f42-8544-80c3-8919-e22d74d945ea"
VALID_ARCHETYPES = {
"Wirtschaftlicher Entscheider",
"Operativer Entscheider",
"Infrastruktur-Verantwortlicher",
"Innovations-Treiber",
"Influencer"
}
def load_notion_token():
@@ -65,6 +66,10 @@ def extract_title(prop):
if not prop: return ""
return "".join([t.get("plain_text", "") for t in prop.get("title", [])])
def extract_rich_text(prop):
if not prop: return ""
return "".join([t.get("plain_text", "") for t in prop.get("rich_text", [])])
def extract_rich_text_to_list(prop):
"""
Extracts rich text and converts bullet points/newlines into a list of strings.
@@ -94,7 +99,8 @@ def sync_personas(token, session):
for page in pages:
props = page.get("properties", {})
# The title property is 'Role' in the new DB, not 'Name'
name = extract_title(props.get("Role"))
if name not in VALID_ARCHETYPES:
logger.debug(f"Skipping '{name}' (Not a target Archetype)")
@@ -105,6 +111,11 @@ def sync_personas(token, session):
pains_list = extract_rich_text_to_list(props.get("Pains"))
gains_list = extract_rich_text_to_list(props.get("Gains"))
description = extract_rich_text(props.get("Rollenbeschreibung"))
convincing_arguments = extract_rich_text(props.get("Was ihn überzeugt"))
typical_positions = extract_rich_text(props.get("Typische Positionen"))
kpis = extract_rich_text(props.get("KPIs"))
# Upsert Logic
persona = session.query(Persona).filter(Persona.name == name).first()
if not persona:
@@ -116,6 +127,10 @@ def sync_personas(token, session):
persona.pains = json.dumps(pains_list, ensure_ascii=False)
persona.gains = json.dumps(gains_list, ensure_ascii=False)
persona.description = description
persona.convincing_arguments = convincing_arguments
persona.typical_positions = typical_positions
persona.kpis = kpis
count += 1 count += 1

View File

@@ -4,7 +4,7 @@ import { ContactsTable } from './components/ContactsTable' // NEW
import { ImportWizard } from './components/ImportWizard'
import { Inspector } from './components/Inspector'
import { RoboticsSettings } from './components/RoboticsSettings'
import { LayoutDashboard, UploadCloud, RefreshCw, Settings, Users, Building, Sun, Moon, Activity } from 'lucide-react'
import clsx from 'clsx'
// Base URL detection (Production vs Dev)
@@ -119,6 +119,16 @@ function App() {
{theme === 'dark' ? <Sun className="h-5 w-5" /> : <Moon className="h-5 w-5" />} {theme === 'dark' ? <Sun className="h-5 w-5" /> : <Moon className="h-5 w-5" />}
</button> </button>
<a
href="/connector/dashboard"
target="_blank"
rel="noopener noreferrer"
className="p-2 hover:bg-slate-100 dark:hover:bg-slate-800 rounded-full transition-colors text-slate-500 dark:text-slate-400"
title="Connector Status Dashboard"
>
<Activity className="h-5 w-5" />
</a>
<button <button
onClick={() => setIsSettingsOpen(true)} onClick={() => setIsSettingsOpen(true)}
className="p-2 hover:bg-slate-100 dark:hover:bg-slate-800 rounded-full transition-colors text-slate-500 dark:text-slate-400" className="p-2 hover:bg-slate-100 dark:hover:bg-slate-800 rounded-full transition-colors text-slate-500 dark:text-slate-400"

View File

@@ -97,10 +97,31 @@ The Connector is the messenger that carries this data into the CRM.
 * `UDF_Bridge`
 * `UDF_Proof`
 * `UDF_Subject`
+* `UDF_UnsubscribeLink`
 ---
-## 6. Setup & Maintenance
+## 6. Monitoring & Dashboard ("The Eyes")
+The system ships with an integrated real-time dashboard for monitoring the synchronization processes.
+**Features:**
+* **Account-based view:** Groups all events by SuperOffice account or person to show the current status per record.
+* **Phase visualization:** Renders progress in four phases:
+    1. **Received:** Webhook received successfully.
+    2. **Enriching:** Data enrichment in the Company Explorer is running (blinking yellow = in progress).
+    3. **Syncing:** Writing the data back to SuperOffice (blinking yellow = in progress).
+    4. **Completed:** Process finished successfully for this contact (green).
+* **Performance tracking:** Shows the total processing time (duration) per process.
+* **Error analysis:** Detailed error messages directly in the overview.
+* **Dark mode:** Modern UI design for admin monitoring.
+**Access:**
+The dashboard is reachable via the Company Explorer frontend ("Activity" icon in the header) or directly at `/connector/dashboard`.
+---
+## 7. Setup & Maintenance
 ### Add a new industry
 1. Create it in **Notion** (define Pains/Gains/Products).
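The four-phase model documented above (Received → Enriching → Syncing → Completed) can be sketched as a pure mapping from a job's queue status to phase states; a minimal sketch (function name hypothetical, status strings as used by the job queue):

```python
def phases_for(status, error_msg=None):
    """Map a queue job status to the four dashboard phases."""
    phases = {"received": "completed", "enriching": "pending",
              "syncing": "pending", "completed": "pending"}
    if status == "COMPLETED":
        phases = {k: "completed" for k in phases}
    elif status == "FAILED":
        phases["enriching"] = "failed"
    elif status == "PROCESSING":
        phases["enriching"] = "processing"
    elif status == "PENDING" and error_msg and "processing" in error_msg.lower():
        # CE reported it is still enriching; keep the yellow blink on phase 2
        phases["enriching"] = "processing"
    elif status == "PENDING":
        phases["received"] = "processing"
    return phases

print(phases_for("PROCESSING"))
```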

View File

@@ -77,13 +77,19 @@ class JobQueue:
         return job
-    def retry_job_later(self, job_id, delay_seconds=60):
+    def retry_job_later(self, job_id, delay_seconds=60, error_msg=None):
         next_try = datetime.utcnow() + timedelta(seconds=delay_seconds)
         with sqlite3.connect(DB_PATH) as conn:
-            conn.execute(
-                "UPDATE jobs SET status = 'PENDING', next_try_at = ?, updated_at = datetime('now') WHERE id = ?",
-                (next_try, job_id)
-            )
+            if error_msg:
+                conn.execute(
+                    "UPDATE jobs SET status = 'PENDING', next_try_at = ?, updated_at = datetime('now'), error_msg = ? WHERE id = ?",
+                    (next_try, str(error_msg), job_id)
+                )
+            else:
+                conn.execute(
+                    "UPDATE jobs SET status = 'PENDING', next_try_at = ?, updated_at = datetime('now') WHERE id = ?",
+                    (next_try, job_id)
+                )
     def complete_job(self, job_id):
         with sqlite3.connect(DB_PATH) as conn:
@@ -125,3 +131,113 @@ class JobQueue:
                 pass
             results.append(r)
         return results
+    def get_account_summary(self, limit=1000):
+        """
+        Groups recent jobs by ContactId/PersonId and returns a summary status.
+        """
+        jobs = self.get_recent_jobs(limit=limit)
+        accounts = {}
+        for job in jobs:
+            payload = job.get('payload', {})
+            # Try to find IDs
+            c_id = payload.get('ContactId')
+            p_id = payload.get('PersonId')
+            # Fallback for cascaded jobs or primary keys
+            if not c_id and payload.get('PrimaryKey') and 'contact' in job['event_type'].lower():
+                c_id = payload.get('PrimaryKey')
+            if not p_id and payload.get('PrimaryKey') and 'person' in job['event_type'].lower():
+                p_id = payload.get('PrimaryKey')
+            if not c_id and not p_id:
+                continue
+            # Create a unique key for the entity
+            key = f"P{p_id}" if p_id else f"C{c_id}"
+            if key not in accounts:
+                accounts[key] = {
+                    "id": key,
+                    "contact_id": c_id,
+                    "person_id": p_id,
+                    "name": "Unknown",
+                    "last_event": job['event_type'],
+                    "status": job['status'],
+                    "created_at": job['created_at'],  # Oldest job in group (since we sort by DESC)
+                    "updated_at": job['updated_at'],  # Most recent job
+                    "error_msg": job['error_msg'],
+                    "job_count": 0,
+                    "duration": "0s",
+                    "phases": {
+                        "received": "completed",
+                        "enriching": "pending",
+                        "syncing": "pending",
+                        "completed": "pending"
+                    }
+                }
+            acc = accounts[key]
+            acc["job_count"] += 1
+            # Update duration
+            try:
+                # We want the absolute start (oldest created_at)
+                # Since jobs are DESC, the last one we iterate through for a key is the oldest
+                acc["created_at"] = job["created_at"]
+                start = datetime.strptime(acc["created_at"], "%Y-%m-%d %H:%M:%S")
+                end = datetime.strptime(acc["updated_at"], "%Y-%m-%d %H:%M:%S")
+                diff = end - start
+                seconds = int(diff.total_seconds())
+                if seconds < 60:
+                    acc["duration"] = f"{seconds}s"
+                else:
+                    acc["duration"] = f"{seconds // 60}m {seconds % 60}s"
+            except Exception:
+                pass
+            # Try to resolve 'Unknown' name from any job in the group
+            if acc["name"] == "Unknown":
+                name = payload.get('Name') or payload.get('crm_name') or payload.get('FullName') or payload.get('ContactName')
+                if not name and payload.get('Firstname'):
+                    name = f"{payload.get('Firstname')} {payload.get('Lastname', '')}".strip()
+                if name:
+                    acc["name"] = name
+            # Update overall status based on most recent job
+            # (Assuming jobs are sorted by updated_at DESC)
+            if acc["job_count"] == 1:
+                acc["status"] = job["status"]
+                acc["updated_at"] = job["updated_at"]
+                acc["error_msg"] = job["error_msg"]
+                # Determine Phase
+                if job["status"] == "COMPLETED":
+                    acc["phases"] = {
+                        "received": "completed",
+                        "enriching": "completed",
+                        "syncing": "completed",
+                        "completed": "completed"
+                    }
+                elif job["status"] == "FAILED":
+                    acc["phases"]["received"] = "completed"
+                    acc["phases"]["enriching"] = "failed"
+                elif job["status"] == "PROCESSING":
+                    acc["phases"]["received"] = "completed"
+                    acc["phases"]["enriching"] = "processing"
+                elif job["status"] == "PENDING":
+                    acc["phases"]["received"] = "completed"
+                    # If it has an error msg like 'processing', it's in enriching
+                    if job["error_msg"] and "processing" in job["error_msg"].lower():
+                        acc["phases"]["enriching"] = "processing"
+                    else:
+                        acc["phases"]["received"] = "processing"
+        # Final cleanup for names
+        for acc in accounts.values():
+            if acc["name"] == "Unknown":
+                acc["name"] = f"Entity {acc['id']}"
+        return list(accounts.values())
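The inline duration formatting in `get_account_summary` can be exercised in isolation; a sketch (helper name hypothetical, timestamp format as in the code above):

```python
from datetime import datetime

def format_duration(created_at, updated_at, fmt="%Y-%m-%d %H:%M:%S"):
    """Render the elapsed time between two SQLite timestamps as '42s' or '3m 5s'."""
    delta = datetime.strptime(updated_at, fmt) - datetime.strptime(created_at, fmt)
    seconds = int(delta.total_seconds())
    return f"{seconds}s" if seconds < 60 else f"{seconds // 60}m {seconds % 60}s"

print(format_duration("2026-02-24 12:00:00", "2026-02-24 12:01:30"))  # → 1m 30s
```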

View File

@@ -56,6 +56,10 @@ def stats():
 def get_jobs():
     return queue.get_recent_jobs(limit=100)
+@app.get("/api/accounts")
+def get_accounts():
+    return queue.get_account_summary(limit=500)
 @app.get("/dashboard", response_class=HTMLResponse)
 def dashboard():
     html_content = """
@@ -63,70 +67,188 @@ def dashboard():
     <html>
     <head>
         <title>Connector Dashboard</title>
-        <meta http-equiv="refresh" content="5">
+        <meta http-equiv="refresh" content="30">
         <style>
-            body { font-family: sans-serif; padding: 20px; background: #f0f2f5; }
-            .container { max-width: 1200px; margin: 0 auto; background: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); }
-            h1 { color: #333; }
-            table { width: 100%; border-collapse: collapse; margin-top: 20px; }
-            th, td { text-align: left; padding: 12px; border-bottom: 1px solid #ddd; font-size: 14px; }
-            th { background-color: #f8f9fa; color: #666; font-weight: 600; }
-            tr:hover { background-color: #f8f9fa; }
-            .status { padding: 4px 8px; border-radius: 4px; font-size: 12px; font-weight: bold; text-transform: uppercase; }
-            .status-PENDING { background: #e2e8f0; color: #475569; }
-            .status-PROCESSING { background: #dbeafe; color: #1e40af; }
-            .status-COMPLETED { background: #dcfce7; color: #166534; }
-            .status-FAILED { background: #fee2e2; color: #991b1b; }
-            .status-RETRY { background: #fef9c3; color: #854d0e; }
-            .meta { color: #888; font-size: 12px; }
-            pre { margin: 0; white-space: pre-wrap; word-break: break-word; color: #444; font-family: monospace; font-size: 11px; max-height: 60px; overflow-y: auto; }
+            body {
+                font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
+                padding: 20px;
+                background: #0f172a;
+                color: #f1f5f9;
+            }
+            .container {
+                max-width: 1200px;
+                margin: 0 auto;
+                background: #1e293b;
+                padding: 24px;
+                border-radius: 12px;
+                box-shadow: 0 10px 15px -3px rgba(0, 0, 0, 0.3);
+                border: 1px solid #334155;
+            }
+            header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 24px; }
+            h1 { margin: 0; font-size: 24px; color: #f8fafc; }
+            .tabs { display: flex; gap: 8px; margin-bottom: 20px; border-bottom: 1px solid #334155; padding-bottom: 10px; }
+            .tab { padding: 8px 16px; cursor: pointer; border-radius: 6px; font-weight: 500; font-size: 14px; color: #94a3b8; transition: all 0.2s; }
+            .tab:hover { background: #334155; color: #f8fafc; }
+            .tab.active { background: #3b82f6; color: white; }
+            table { width: 100%; border-collapse: collapse; }
+            th, td { text-align: left; padding: 14px; border-bottom: 1px solid #334155; font-size: 14px; }
+            th { background-color: #1e293b; color: #94a3b8; font-weight: 600; text-transform: uppercase; font-size: 12px; letter-spacing: 0.5px; }
+            tr:hover { background-color: #334155; }
+            .status { padding: 4px 8px; border-radius: 6px; font-size: 11px; font-weight: 700; text-transform: uppercase; }
+            .status-PENDING { background: #334155; color: #cbd5e1; }
+            .status-PROCESSING { background: #1e40af; color: #bfdbfe; }
+            .status-COMPLETED { background: #064e3b; color: #a7f3d0; }
+            .status-FAILED { background: #7f1d1d; color: #fecaca; }
+            .phases { display: flex; gap: 4px; align-items: center; }
+            .phase { width: 12px; height: 12px; border-radius: 50%; background: #334155; border: 2px solid #1e293b; box-shadow: 0 0 0 1px #334155; }
+            .phase.completed { background: #10b981; box-shadow: 0 0 0 1px #10b981; }
+            .phase.processing { background: #f59e0b; box-shadow: 0 0 0 1px #f59e0b; animation: pulse 1.5s infinite; }
+            .phase.failed { background: #ef4444; box-shadow: 0 0 0 1px #ef4444; }
+            @keyframes pulse { 0% { opacity: 1; } 50% { opacity: 0.4; } 100% { opacity: 1; } }
+            .meta { color: #94a3b8; font-size: 12px; display: block; margin-top: 4px; }
+            pre {
+                margin: 0;
+                white-space: pre-wrap;
+                word-break: break-word;
+                color: #cbd5e1;
+                font-family: 'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, monospace;
+                font-size: 11px;
+                max-height: 80px;
+                overflow-y: auto;
+                background: #0f172a;
+                padding: 10px;
+                border-radius: 6px;
+                border: 1px solid #334155;
+            }
+            .hidden { display: none; }
         </style>
     </head>
     <body>
         <div class="container">
-            <div style="display: flex; justify-content: space-between; align-items: center;">
+            <header>
                 <h1>🔌 SuperOffice Connector Dashboard</h1>
                 <div id="stats"></div>
+            </header>
+            <div class="tabs">
+                <div class="tab active" id="tab-accounts" onclick="switchTab('accounts')">Account View</div>
+                <div class="tab" id="tab-events" onclick="switchTab('events')">Event Log</div>
             </div>
-            <table>
-                <thead>
-                    <tr>
-                        <th width="50">ID</th>
-                        <th width="120">Status</th>
-                        <th width="150">Updated</th>
-                        <th width="150">Event</th>
-                        <th>Payload / Error</th>
-                    </tr>
-                </thead>
-                <tbody id="job-table">
-                    <tr><td colspan="5" style="text-align:center;">Loading...</td></tr>
-                </tbody>
-            </table>
+            <div id="view-accounts">
+                <table>
+                    <thead>
+                        <tr>
+                            <th>Account / Person</th>
+                            <th width="120">ID</th>
+                            <th width="150">Process Progress</th>
+                            <th width="100">Duration</th>
+                            <th width="120">Status</th>
+                            <th width="150">Last Update</th>
+                            <th>Details</th>
+                        </tr>
+                    </thead>
+                    <tbody id="account-table">
+                        <tr><td colspan="6" style="text-align:center;">Loading Accounts...</td></tr>
+                    </tbody>
+                </table>
+            </div>
+            <div id="view-events" class="hidden">
+                <table>
+                    <thead>
+                        <tr>
+                            <th width="50">ID</th>
+                            <th width="120">Status</th>
+                            <th width="150">Updated</th>
+                            <th width="150">Event</th>
+                            <th>Payload / Error</th>
+                        </tr>
+                    </thead>
+                    <tbody id="event-table">
+                        <tr><td colspan="5" style="text-align:center;">Loading Events...</td></tr>
+                    </tbody>
+                </table>
+            </div>
         </div>
         <script>
+            let currentTab = 'accounts';
+            function switchTab(tab) {
+                currentTab = tab;
+                document.getElementById('tab-accounts').classList.toggle('active', tab === 'accounts');
+                document.getElementById('tab-events').classList.toggle('active', tab === 'events');
+                document.getElementById('view-accounts').classList.toggle('hidden', tab !== 'accounts');
+                document.getElementById('view-events').classList.toggle('hidden', tab !== 'events');
+                loadData();
+            }
             async function loadData() {
+                if (currentTab === 'accounts') await loadAccounts();
+                else await loadEvents();
+            }
+            async function loadAccounts() {
                 try {
-                    // Use relative path to work behind Nginx /connector/ prefix
-                    const response = await fetch('api/jobs');
-                    const jobs = await response.json();
-                    const tbody = document.getElementById('job-table');
+                    const response = await fetch('api/accounts');
+                    const accounts = await response.json();
+                    const tbody = document.getElementById('account-table');
                     tbody.innerHTML = '';
-                    if (jobs.length === 0) {
-                        tbody.innerHTML = '<tr><td colspan="5" style="text-align:center;">No jobs found</td></tr>';
+                    if (accounts.length === 0) {
+                        tbody.innerHTML = '<tr><td colspan="6" style="text-align:center;">No accounts in process</td></tr>';
                         return;
                     }
+                    accounts.sort((a,b) => new Date(b.updated_at) - new Date(a.updated_at));
+                    accounts.forEach(acc => {
+                        const tr = document.createElement('tr');
+                        const phasesHtml = `
+                            <div class="phases">
+                                <div class="phase ${acc.phases.received}" title="Received"></div>
+                                <div class="phase ${acc.phases.enriching}" title="Enriching (CE)"></div>
+                                <div class="phase ${acc.phases.syncing}" title="Syncing (SO)"></div>
+                                <div class="phase ${acc.phases.completed}" title="Completed"></div>
+                            </div>
+                        `;
+                        tr.innerHTML = `
+                            <td>
+                                <strong>${acc.name}</strong>
+                                <span class="meta">${acc.last_event}</span>
+                            </td>
+                            <td>${acc.id}</td>
+                            <td>${phasesHtml}</td>
+                            <td><span class="meta">${acc.duration || '0s'}</span></td>
+                            <td><span class="status status-${acc.status}">${acc.status}</span></td>
+                            <td>${new Date(acc.updated_at + "Z").toLocaleTimeString()}</td>
+                            <td><pre>${acc.error_msg || 'No issues'}</pre></td>
+                        `;
+                        tbody.appendChild(tr);
+                    });
+                } catch (e) { console.error("Failed to load accounts", e); }
+            }
+            async function loadEvents() {
+                try {
+                    const response = await fetch('api/jobs');
+                    const jobs = await response.json();
+                    const tbody = document.getElementById('event-table');
+                    tbody.innerHTML = '';
                     jobs.forEach(job => {
                         const tr = document.createElement('tr');
                         let details = JSON.stringify(job.payload, null, 2);
-                        if (job.error_msg) {
-                            details += "\\n\\n🔴 ERROR: " + job.error_msg;
-                        }
+                        if (job.error_msg) details += "\\n\\n🔴 ERROR: " + job.error_msg;
                         tr.innerHTML = `
                             <td>#${job.id}</td>
@@ -137,13 +259,11 @@ def dashboard():
                         `;
                         tbody.appendChild(tr);
                     });
-                } catch (e) {
-                    console.error("Failed to load jobs", e);
-                }
+                } catch (e) { console.error("Failed to load events", e); }
             }
             loadData();
-            // Also handled by meta refresh, but JS refresh is smoother if we want to remove meta refresh
+            setInterval(loadData, 5000);
        </script>
    </body>
    </html>

View File

@@ -391,7 +391,9 @@ def run_worker():
         try:
             result = process_job(job, so_client)
             if result == "RETRY":
-                queue.retry_job_later(job['id'], delay_seconds=120)
+                queue.retry_job_later(job['id'], delay_seconds=120, error_msg="CE is processing...")
+            elif result == "FAILED":
+                queue.fail_job(job['id'], "Job failed with FAILED status")
             else:
                 queue.complete_job(job['id'])
         except Exception as e:

debug_paths.py (new file, 13 lines)

@@ -0,0 +1,13 @@
import os
static_path = "/frontend_static"
print(f"Path {static_path} exists: {os.path.exists(static_path)}")
if os.path.exists(static_path):
    for root, dirs, files in os.walk(static_path):
        for file in files:
            print(os.path.join(root, file))
else:
    print("Listing /app instead:")
    for root, dirs, files in os.walk("/app"):
        if "node_modules" in root: continue
        for file in files:
            print(os.path.join(root, file))

inspect_persona_db.py (new file, 24 lines)

@@ -0,0 +1,24 @@
import sys
import os
import requests
import json
NOTION_TOKEN_FILE = "/app/notion_token.txt"
PERSONAS_DB_ID = "2e288f42-8544-8113-b878-ec99c8a02a6b"
def load_notion_token():
    with open(NOTION_TOKEN_FILE, "r") as f:
        return f.read().strip()
def query_notion_db(token, db_id):
    url = f"https://api.notion.com/v1/databases/{db_id}/query"
    headers = {
        "Authorization": f"Bearer {token}",
        "Notion-Version": "2022-06-28"
    }
    response = requests.post(url, headers=headers)
    return response.json()
token = load_notion_token()
data = query_notion_db(token, PERSONAS_DB_ID)
print(json.dumps(data.get("results", [])[0], indent=2))

inspect_persona_db_v2.py (new file, 30 lines)

@@ -0,0 +1,30 @@
import sys
import os
import requests
import json
NOTION_TOKEN_FILE = "/app/notion_token.txt"
PERSONAS_DB_ID = "30588f42-8544-80c3-8919-e22d74d945ea"
def load_notion_token():
    with open(NOTION_TOKEN_FILE, "r") as f:
        return f.read().strip()
def query_notion_db(token, db_id):
    url = f"https://api.notion.com/v1/databases/{db_id}/query"
    headers = {
        "Authorization": f"Bearer {token}",
        "Notion-Version": "2022-06-28"
    }
    response = requests.post(url, headers=headers)
    return response.json()
token = load_notion_token()
data = query_notion_db(token, PERSONAS_DB_ID)
results = data.get("results", [])
for res in results:
    props = res.get("properties", {})
    role = "".join([t.get("plain_text", "") for t in props.get("Role", {}).get("title", [])])
    print(f"Role: {role}")
    print(json.dumps(props, indent=2))
    print("-" * 40)

list_industries_db.py (new file, 12 lines)

@@ -0,0 +1,12 @@
import sqlite3
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT name FROM industries")
industries = cursor.fetchall()
print("Available Industries:")
for ind in industries:
    print(f"- {ind[0]}")
conn.close()

migrate_personas_v2.py (new file, 30 lines)

@@ -0,0 +1,30 @@
import sqlite3
import os
DB_PATH = "/app/companies_v3_fixed_2.db"
def migrate_personas():
    print(f"Adding new columns to 'personas' table in {DB_PATH}...")
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    columns_to_add = [
        ("description", "TEXT"),
        ("convincing_arguments", "TEXT"),
        ("typical_positions", "TEXT"),
        ("kpis", "TEXT")
    ]
    for col_name, col_type in columns_to_add:
        try:
            cursor.execute(f"ALTER TABLE personas ADD COLUMN {col_name} {col_type}")
            print(f"  Added column: {col_name}")
        except sqlite3.OperationalError:
            print(f"  Column {col_name} already exists.")
    conn.commit()
    conn.close()
    print("Migration complete.")
if __name__ == "__main__":
    migrate_personas()
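The try/except around `ALTER TABLE` makes this migration idempotent: re-running it skips columns that already exist instead of failing. A self-contained check of that pattern against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personas (id INTEGER PRIMARY KEY, name TEXT)")
for _ in range(2):  # running the migration twice must not fail
    try:
        conn.execute("ALTER TABLE personas ADD COLUMN kpis TEXT")
    except sqlite3.OperationalError:
        pass  # column already exists on the second run
cols = [row[1] for row in conn.execute("PRAGMA table_info(personas)")]
print(cols)  # → ['id', 'name', 'kpis']
```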

View File

@@ -169,13 +169,6 @@ http {
     location /connector/ {
         # SuperOffice Connector Webhook & Dashboard
-        # Auth enabled for dashboard access (webhook endpoint might need exclusion if public,
-        # but current webhook_app checks token param so maybe basic auth is fine for /dashboard?)
-        # For now, let's keep it open or use token.
-        # Ideally: /connector/webhook -> open, /connector/dashboard -> protected.
-        # Nginx doesn't support nested locations well for auth_basic override without duplicating.
-        # Simplified: Auth off globally for /connector/, rely on App logic or obscurity for now.
         auth_basic off;
         # Forward to FastAPI app
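The removed comment block wished for a split setup (webhook open, dashboard protected). An exact-match location can re-enable basic auth for just the dashboard without nesting; a rough sketch (upstream address and credentials file are assumptions):

```nginx
# Exact match wins over the prefix location /connector/,
# so only the dashboard requires credentials.
location = /connector/dashboard {
    auth_basic "Connector Admin";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:8001/dashboard;
}
```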

verify_db.py (new file, 13 lines)

@@ -0,0 +1,13 @@
import sqlite3
DB_PATH = "/app/companies_v3_fixed_2.db"
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT name, description, convincing_arguments FROM personas")
rows = cursor.fetchall()
for row in rows:
    print(f"Persona: {row[0]}")
    print(f"  Description: {row[1][:100]}...")
    print(f"  Convincing: {row[2][:100]}...")
    print("-" * 20)
conn.close()