Stabilize Lead Engine calendar logic (v1.4) and integrate GTM Architect, B2B Assistant, and Transcription Tool into Docker stack [30388f42]
RELOCATION.md
@@ -13,7 +13,9 @@ These ports must be open on the firewall for inbound traffic to the VM `10.10.

| **2222** | `gitea` | Intranet | Gitea Git via SSH. |
| **8003** | `connector-so` | **Public** | SuperOffice webhook receiver (SSL required!). |
| **5678** | `n8n` | **Public** | Automation webhooks. |
| **8094** | `gtm-architect`| Intranet | GTM Architect direct. |
| **8092** | `b2b-marketing`| Intranet | B2B Marketing Assistant direct. |
| **8001** | `transcription`| Intranet | Transcription Tool direct (via 8090). |

---
@@ -22,61 +24,82 @@ These ports must be open on the firewall for inbound traffic to the VM `10.10.

* **DNS resolver:** Configured in Nginx (`resolver 127.0.0.11`).
* **WebSockets:** The gateway supports `Upgrade` headers (critical for Streamlit/Lead Engine).
* **Echo prevention:** The connector (`worker.py`) identifies itself dynamically. No manual ID entries in `.env` are needed as long as `SO_CLIENT_ID` matches.
* **Routing:**
  * `/ce/` -> `company-explorer:8000`
  * `/lead/` -> `lead-engine:8501` (UI)
  * `/feedback/` -> `lead-engine:8004` (API)
  * `/gtm/` -> `gtm-architect:3005` (API/frontend)
  * `/b2b/` -> `b2b-marketing-assistant:3002` (API/frontend)
  * `/tr/` -> `transcription-tool:8001` (API/frontend) -> **Caution:** requires an explicit `rewrite` in Nginx!

---
# ⚠️ Critical Lessons (Update 08.03.2026)

The relocation must follow these points without fail to avoid another "total outage":
### 1. Database Schema & Volumes

**Problem:** Old `.db` files (backups) often lack columns that the current code expects.

**Solution:** After starting the containers on the new VM, the migration script **must** be run:

```bash
docker exec -it company-explorer python /app/fix_missing_columns.py
```

*This repairs the tables for companies, industries and contacts (including unsubscribe tokens).*
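The real logic lives in `fix_missing_columns.py` inside the container; as a rough sketch of the underlying pattern (table and column names here are hypothetical, not taken from the actual script), it compares `PRAGMA table_info` against the expected schema and issues `ALTER TABLE … ADD COLUMN` for anything missing:

```python
import sqlite3

# Hypothetical expected schema; the real script covers companies,
# industries and contacts (including unsubscribe tokens).
EXPECTED = {"contacts": {"unsubscribe_token": "TEXT"}}

def fix_missing_columns(db_path: str) -> list[str]:
    """Add any expected columns that are missing; return what was added."""
    added = []
    con = sqlite3.connect(db_path)
    try:
        for table, columns in EXPECTED.items():
            # row[1] of PRAGMA table_info is the column name
            existing = {row[1] for row in con.execute(f"PRAGMA table_info({table})")}
            for name, sqltype in columns.items():
                if name not in existing:
                    con.execute(f"ALTER TABLE {table} ADD COLUMN {name} {sqltype}")
                    added.append(f"{table}.{name}")
        con.commit()
    finally:
        con.close()
    return added
```

Running it twice is safe: the second pass finds nothing to add and changes nothing.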
**Rule:** Databases are **NEVER** again mounted directly onto a host path.

**Reason:** Permission errors and SQLite locks on network file systems.

**Approach:** Use named volumes (`explorer_db_data`, `connector_db_data`, `lead_engine_data`, `gtm_architect_data`, `b2b_marketing_data`, `transcription_uploads`).
### 2. Lead Engine: Calendar Logic (v1.4)

* **Grid:** The system only offers appointments on a **15-minute grid** (:00, :15, :30, :45).
* **Spacing:** At least **3 hours** of pause lie between any two proposed slots.
* **AppOnly workaround:** The appointment is created in the calendar of `info@robo-planet.de`, with the employee (`e.melcer@`) added as an attendee.
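A minimal sketch of that grid logic (naive datetimes, illustrative function names; the production code additionally checks real free/busy data via the Graph API first):

```python
from datetime import datetime, timedelta

GRID_MINUTES = 15              # slots may start only at :00, :15, :30, :45
MIN_GAP = timedelta(hours=3)   # pause between the two proposals

def snap_to_grid(t: datetime) -> datetime:
    """Round *up* to the next 15-minute boundary."""
    t = t.replace(second=0, microsecond=0)
    overshoot = t.minute % GRID_MINUTES
    if overshoot:
        t += timedelta(minutes=GRID_MINUTES - overshoot)
    return t

def propose_slots(earliest: datetime) -> tuple[datetime, datetime]:
    """Return two proposals on the grid, at least MIN_GAP apart."""
    first = snap_to_grid(earliest)
    second = snap_to_grid(first + MIN_GAP)
    return first, second
```

For example, an earliest start of 09:07 yields 09:15 as the first proposal and a second one three hours later, still on the grid.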
### 3. GTM Architect & B2B Assistant: Standalone Operation

* **Architecture:** Both apps follow the "self-contained image" pattern: code, frontend builds (`dist/`) and `node_modules` are baked into the image.
* **GTM port:** 3005 internal.
* **B2B port:** 3002 internal.
* **DB dependency:** The B2B Assistant strictly requires the file `market_db_manager.py` (copied from the repo root at build time).
### 4. Transcription Tool: FFmpeg & Routing

* **FFmpeg:** Must be present in the image (the build takes roughly 15 minutes on the Synology).
* **Paths:** The tool needs a `tsconfig.json` in the `frontend/` folder for the TypeScript build.
* **Nginx:** The `/tr/` path must be rewritten explicitly: `rewrite ^/tr/(.*) /$1 break;`.
---

### 📂 Docker Volume Migration (the "Plug & Play" way)

To move the data (companies, leads, projects, audio files) without loss, the named volumes must be backed up.

**On the Synology (source):**

```bash
# Back up all critical volumes into archives
docker run --rm -v explorer_db_data:/data -v $(pwd):/backup alpine tar czf /backup/explorer_data.tar.gz -C /data .
docker run --rm -v lead_engine_data:/data -v $(pwd):/backup alpine tar czf /backup/lead_data.tar.gz -C /data .
docker run --rm -v gtm_architect_data:/data -v $(pwd):/backup alpine tar czf /backup/gtm_data.tar.gz -C /data .
docker run --rm -v b2b_marketing_data:/data -v $(pwd):/backup alpine tar czf /backup/b2b_data.tar.gz -C /data .
docker run --rm -v transcription_uploads:/data -v $(pwd):/backup alpine tar czf /backup/tr_uploads.tar.gz -C /data .
```

**On the Ubuntu VM (target):**

1. Create the volumes: `docker volume create explorer_db_data` (etc.)
2. Restore the data:

```bash
docker run --rm -v explorer_db_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/explorer_data.tar.gz"
```
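The five backup commands all follow one pattern, so a throwaway helper can generate the matching backup/restore pairs and avoid copy-paste slips (volume and archive names as listed above; the helper itself is not part of the repo):

```python
# Volume -> archive mapping, as used in the migration commands above.
VOLUMES = {
    "explorer_db_data": "explorer_data.tar.gz",
    "lead_engine_data": "lead_data.tar.gz",
    "gtm_architect_data": "gtm_data.tar.gz",
    "b2b_marketing_data": "b2b_data.tar.gz",
    "transcription_uploads": "tr_uploads.tar.gz",
}

def backup_command(volume: str, archive: str) -> str:
    """Emit the 'tar czf inside alpine' backup one-liner for a volume."""
    return (f"docker run --rm -v {volume}:/data -v $(pwd):/backup "
            f"alpine tar czf /backup/{archive} -C /data .")

def restore_command(volume: str, archive: str) -> str:
    """Emit the matching restore one-liner for the target VM."""
    return (f"docker run --rm -v {volume}:/data -v $(pwd):/backup "
            f'alpine sh -c "cd /data && tar xzf /backup/{archive}"')

if __name__ == "__main__":
    for vol, ar in VOLUMES.items():
        print(backup_command(vol, ar))
```

Printing `restore_command(...)` for each pair gives the target-side script in the same pass.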
---

### **Binding Migration Plan**

**Phase 1: Preparation**

1. [ ] **`git push`** on the Synology (latest state incl. GTM integration).
2. [ ] **Back up the `.env` file** (check it for completeness!).
3. [ ] **Back up the volumes** (see above: create the `tar.gz` archives).
**Phase 2: Deployment on `docker1`**

1. [ ] Clone the repo: `git clone ... /opt/gtm-engine`
2. [ ] Copy `.env`.
3. [ ] **Restore the volumes** (BEFORE running `docker compose up`).
4. [ ] Start: `docker compose up -d --build`
5. [ ] **Schema check:** `docker exec -it company-explorer python /app/fix_missing_columns.py`
**Phase 3: Verification**

1. [ ] Check calendar read access: `docker exec lead-engine python /app/trading_twins/test_calendar_logic.py`
2. [ ] Check GTM Architect: `https://10.10.81.2:8090/gtm/`

---
### **Current Open Todos (Prioritized)**

1. **Lead Engine:** Add the Microsoft Graph credentials to `.env` (fixes the 401 errors).
2. **n8n:** Export the workflows from the Synology and import them on the new instance.
3. **Styling:** Verify the frontend CSS in the Company Explorer (the build works again, but a UI check is needed).
@@ -36,15 +36,20 @@ RUN pip install --no-cache-dir -r requirements.txt

# Copy the Node.js server and its production dependencies manifest
COPY b2b-marketing-assistant/server.cjs .
COPY b2b-marketing-assistant/package.json .
COPY helpers.py .
COPY config.py .
COPY market_db_manager.py .

# Install dependencies for the Node.js server
RUN npm install
RUN npm install express cors

# Copy the built React app from the builder stage
COPY --from=frontend-builder /app/dist ./dist

# Copy the main Python orchestrator script from the project root
COPY b2b-marketing-assistant/b2b_marketing_orchestrator.py .
COPY b2b-marketing-assistant/services ./services

# Expose the port the Node.js server will run on
EXPOSE 3002
@@ -18,3 +18,20 @@ View your app in AI Studio: https://ai.studio/apps/drive/1ZPnGbhaEnyhIyqs2rYhcPX

2. Set the `GEMINI_API_KEY` in the central `.env` file in the project's root directory.
3. Run the app:
   `npm run dev`

## Docker Deployment (Plug & Play)

The **B2B Marketing Assistant** is integrated into the central `docker-compose.yml`.

### Start Service

```bash
# Build and start
docker-compose up -d --build b2b-marketing-assistant
```

### Details

* **External Port:** `8092`
* **Subpath:** `/b2b/`
* **Persistence:** Project data is stored in the `b2b_marketing_data` Docker volume.
* **Base URL:** The frontend is served under the `/b2b/` prefix via Nginx.
@@ -23,6 +23,12 @@ services:

        condition: service_healthy
      lead-engine:
        condition: service_started
      gtm-architect:
        condition: service_started
      b2b-marketing-assistant:
        condition: service_started
      transcription-tool:
        condition: service_started

  # --- DASHBOARD ---
  dashboard:
@@ -33,6 +39,52 @@ services:

      - ./dashboard:/usr/share/nginx/html:ro

  # --- APPS ---
  transcription-tool:
    build:
      context: ./transcription-tool
      dockerfile: Dockerfile
    container_name: transcription-tool
    restart: unless-stopped
    ports:
      - "8001:8001"
    environment:
      GEMINI_API_KEY: "${GEMINI_API_KEY}"
      UPLOAD_DIR: "/app/uploads"
    volumes:
      - transcription_uploads:/app/uploads
      - ./Log_from_docker:/app/logs_debug

  b2b-marketing-assistant:
    build:
      context: .
      dockerfile: b2b-marketing-assistant/Dockerfile
    container_name: b2b-marketing-assistant
    restart: unless-stopped
    ports:
      - "8092:3002"
    environment:
      GEMINI_API_KEY: "${GEMINI_API_KEY}"
      PYTHONUNBUFFERED: "1"
    volumes:
      - b2b_marketing_data:/data
      - ./Log_from_docker:/app/logs_debug

  gtm-architect:
    build:
      context: .
      dockerfile: gtm-architect/Dockerfile
    container_name: gtm-architect
    restart: unless-stopped
    ports:
      - "8094:80"
    environment:
      GEMINI_API_KEY: "${GEMINI_API_KEY}"
      VITE_API_BASE_URL: "/gtm/api"
      GTM_DB_PATH: "/data/gtm_projects.db"
    volumes:
      - ./Log_from_docker:/app/logs_debug
      - gtm_architect_data:/data

  company-explorer:
    build:
      context: ./company-explorer
@@ -144,3 +196,6 @@ volumes:

  connector_db_data: {}
  explorer_db_data: {}
  lead_engine_data: {}
  gtm_architect_data: {}
  b2b_marketing_data: {}
  transcription_uploads: {}
@@ -59,7 +59,27 @@ The **Meeting Assistant** is a powerful suite for transcription and

---

## 4. Docker Deployment (Plug & Play)

The **Meeting Assistant** is fully integrated into the central `docker-compose.yml`.

### Getting Started

```bash
# Build & start
docker-compose up -d --build transcription-tool

# Watch the logs
docker logs -f transcription-tool
```

### Configuration

* **Port:** `8001` internally.
* **Persistence:** Audio uploads are stored in the named volume `transcription_uploads` (`/app/uploads` inside the container).
* **Routing:** The tool runs under the `/tr/` path. Nginx must strip the prefix: `rewrite ^/tr/(.*) /$1 break;`.
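The rewrite rule's effect can be stated precisely: everything after `/tr/` is replayed against the app root, and other paths pass through untouched. A tiny Python equivalent, useful for sanity-checking paths outside Nginx (illustrative only, not part of the tool):

```python
import re

def strip_tr_prefix(uri: str) -> str:
    # Equivalent of the Nginx rule: rewrite ^/tr/(.*) /$1 break;
    return re.sub(r"^/tr/(.*)", r"/\1", uri)
```

So `/tr/api/upload` becomes `/api/upload`, while `/lead/` and friends are left alone.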
---

## 5. Roadmap

* **v0.7: Search:** Global search across all transcripts.
* **v0.8: Q&A on the meeting:** Ask questions directly against the transcript ("What was decided about topic X?").
@@ -40,10 +40,10 @@ COPY gtm-architect/gtm_db_manager.py .

# Install Python and Node.js dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN npm install --force

# Expose the port the server will run on
EXPOSE 3005

# Command to run the server
CMD ["node", "server.cjs"]
@@ -18,3 +18,20 @@ View your app in AI Studio: https://ai.studio/apps/drive/1bvzSOz-NYMzDph6718RuAy

2. Set the `GEMINI_API_KEY` in [.env.local](.env.local) to your Gemini API key.
3. Run the app:
   `npm run dev`

## Docker Deployment (Plug & Play)

The **GTM Architect** is fully integrated into the project's `docker-compose.yml`.

### Start Service

```bash
# Build and start
docker-compose up -d --build gtm-architect
```

### Technical Specs

* **External Port:** `8094`
* **Subpath:** `/gtm/`
* **Persistence:** Data is stored in the `gtm_architect_data` Docker volume.
* **Self-Contained:** The image includes the built frontend and all Node.js/Python dependencies.
gtm-architect/package-lock.json (generated)

@@ -8,8 +8,9 @@

    "name": "roboplanet-gtm-architect",
    "version": "0.0.0",
    "dependencies": {
      "cors": "^2.8.6",
      "dotenv": "^17.3.1",
      "express": "^4.22.1",
      "lucide-react": "^0.562.0",
      "react": "^19.2.3",
      "react-dom": "^19.2.3",

@@ -1575,9 +1576,9 @@

      "license": "MIT"
    },
    "node_modules/cors": {
      "version": "2.8.6",
      "resolved": "https://registry.npmjs.org/cors/-/cors-2.8.6.tgz",
      "integrity": "sha512-tJtZBBHA6vjIAaF6EnIaq6laBBP9aq/Y3ouVJjEfoHbRBcHBAHYcMh/w8LDrk2PvIMMq8gmopa5D4V8RmbrxGw==",
      "license": "MIT",
      "dependencies": {
        "object-assign": "^4",

@@ -1585,6 +1586,10 @@

      },
      "engines": {
        "node": ">= 0.10"
      },
      "funding": {
        "type": "opencollective",
        "url": "https://opencollective.com/express"
      }
    },
    "node_modules/csstype": {

@@ -1665,6 +1670,18 @@

        "url": "https://github.com/sponsors/wooorm"
      }
    },
    "node_modules/dotenv": {
      "version": "17.3.1",
      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.3.1.tgz",
      "integrity": "sha512-IO8C/dzEb6O3F9/twg6ZLXz164a2fhTnEWb95H23Dm4OuN+92NmEAlTrupP9VW6Jm3sO26tQlqyvyi4CsnY9GA==",
      "license": "BSD-2-Clause",
      "engines": {
        "node": ">=12"
      },
      "funding": {
        "url": "https://dotenvx.com"
      }
    },
    "node_modules/dunder-proto": {
      "version": "1.0.1",
      "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
@@ -9,13 +9,14 @@

    "preview": "vite preview"
  },
  "dependencies": {
    "cors": "^2.8.6",
    "dotenv": "^17.3.1",
    "express": "^4.22.1",
    "lucide-react": "^0.562.0",
    "react": "^19.2.3",
    "react-dom": "^19.2.3",
    "react-markdown": "^10.1.0",
    "remark-gfm": "^4.0.0"
  },
  "devDependencies": {
    "@types/node": "^22.14.0",
@@ -11,7 +11,7 @@ const port = 3005;

// --- DATABASE INITIALIZATION ---
// Initialize the SQLite database on startup to ensure the 'gtm_projects' table exists.
const dbScript = path.join(__dirname, 'gtm_db_manager.py'); // CORRECTED PATH
console.log(`[Init] Initializing database via ${dbScript}...`);
const initProcess = spawn('python3', [dbScript, 'init']);
@@ -1,49 +1,38 @@

# Lead Engine: Multi-Source Automation v1.4 [31988f42]
## 🚀 Overview

The **Lead Engine** is a specialized module for the autonomous processing of B2B inquiries. It acts as a bridge between the e-mail inbox and the **Company Explorer**, generating highly personalized, "human expert level" draft replies within minutes.
## 🛠 Main Features

### 1. Intelligent E-Mail Ingest

* **Multi-source:** Monitors the `info@robo-planet.de` inbox via the **Microsoft Graph API**.
* **Filter & routing:** Distinguishes inquiries from **TradingTwins** and the **contact form**.
* **Parsing:** Specialized HTML parsers extract structured data (company, contact, need).
### 2. Contact Research (LinkedIn Lookup)

* **Automation:** Looks up the contact's professional role via **SerpAPI** and **Gemini 2.0 Flash**.
* **Result:** Identifies roles (e.g. "CFO") to adapt the tone of the reply.
### 3. Company Explorer Sync & Monitoring

* **Integration:** Automatically creates accounts and contacts in the CE.
* **Monitor:** A background process (`monitor.py`) watches the analysis status.
* **Data pull:** Pulls industry and dossier into the local lead database.
### 4. Expert Response Generator

* **AI engine:** Gemini 2.0 Flash drafts the e-mails.
* **Context:** Combines lead data + CE data + matrix arguments (pains/gains).
### 5. Trading Twins Autopilot (PRODUCTION v2.1)

The fully automatic "zero touch" workflow for Trading Twins inquiries.

* **Human-in-the-loop:** Elizabeta Melcer receives a Teams message ("Approve/Deny").
* **Feedback server:** An integrated FastAPI server (port 8004) processes the clicks.
* **Direct Calendar Booking (micro-service):**
  * **Logic:** Checks the calendar of `e.melcer` for **real availability**.
  * **Grid:** Appointments start only on a **15-minute grid** (:00, :15, :30, :45).
  * **Spacing:** Offers two slots with roughly **3 hours of pause** between them.
  * **Booking:** Clicking a link -> the server creates an Outlook appointment from `info@` with `e.melcer` as attendee.
## 🏗 Architecture

@@ -52,63 +41,55 @@ The fully automatic "zero touch" workflow for Trading Twins inquiries.
├── app.py                     # Streamlit web interface
├── trading_twins_ingest.py    # E-mail importer (Graph API)
├── monitor.py                 # Monitor + trigger for the orchestrator
├── trading_twins/             # Autopilot module
│   ├── manager.py             # Orchestrator, FastAPI, Graph API logic
│   ├── test_calendar_logic.py # Internal test for calendar access
│   └── signature.html         # HTML signature
└── db.py                      # Local SQLite lead database
```
## 🚨 Lessons Learned & Critical Fixes
### 1. Microsoft Graph API: Calendar Access

* **Problem:** `debug_calendar.py` often failed with `Invalid parameter`.
* **Cause:** URL encoding of timestamps (`+` decayed into a space) and microseconds with 7 digits instead of 6.
* **Fix:** Use `requests(params=...)` and truncate the microseconds.
* **Endpoint:** `/users/{email}/calendar/getSchedule` (POST) is more robust than `/calendarView` (GET).
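A sketch of both fixes, timestamps truncated to whole seconds and the encoding left to the HTTP client (function names are illustrative; the stdlib `urlencode` behaves here like the `params=...` handling in `requests`):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

def graph_timestamp(t: datetime) -> str:
    """ISO 8601 without microseconds, e.g. 2026-03-08T08:00:00+01:00."""
    return t.replace(microsecond=0).isoformat()

def calendar_view_params(start: datetime, end: datetime) -> str:
    # urlencode percent-encodes the '+' of a UTC offset instead of
    # letting it decay into a space inside the URL.
    return urlencode({
        "startDateTime": graph_timestamp(start),
        "endDateTime": graph_timestamp(end),
    })
```

Passing the resulting pairs via `params=` keeps the request free of raw `+` and `:` characters that would otherwise trigger `Invalid parameter`.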
### 2. Exchange AppOnly AccessPolicy (Booking Workaround)

* **Problem:** `Calendars.ReadWrite` often does not let an app create appointments in *someone else's* calendar (`e.melcer@`): `403 Forbidden`.
* **Fix:** The appointment is created in the service account's **own calendar** (`info@`), with the employee (`e.melcer@`) added as an **attendee**. This sidesteps the policy.
### 3. Docker Environment Variables

* **Problem:** Scripts inside the container could not find the credentials even though they were present in `.env`.
* **Solution:** An explicit `load_dotenv` is required in standalone scripts (`test_*.py`). In the main process (`manager.py`), `os.getenv` is sufficient as long as Docker Compose passes the variables through correctly.
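The `load_dotenv` requirement above amounts to populating `os.environ` from the file before `os.getenv` is called. A minimal stdlib sketch of what `python-dotenv` does (illustrative only, not the actual library):

```python
import os

def load_env_file(path: str, override: bool = True) -> None:
    """Parse KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            # skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            if override or key.strip() not in os.environ:
                os.environ[key.strip()] = value.strip()
```

In the containers, the real `load_dotenv(dotenv_path="/app/.env", override=True)` call serves this purpose.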
## 🚀 Deployment

The Lead Engine is integrated as a service in the central `docker-compose.yml`.
```bash
# Restart the service after code changes
docker-compose up -d --build --force-recreate lead-engine

# Manual test (internal)
docker exec lead-engine python /app/trading_twins/test_calendar_logic.py
```

**Access:** `https://floke-ai.duckdns.org/lead/` (password-protected)
**Feedback API:** `https://floke-ai.duckdns.org/feedback/` (public)
## 📝 Credentials (.env)

The following variables in the central `.env` are strictly required for operation:

```env
# Info mailbox (App 1 - write)
INFO_Application_ID=...
INFO_Tenant_ID=...
INFO_Secret=...

# E.Melcer calendar (App 2 - read)
CAL_APPID=...
CAL_TENNANT_ID=...
CAL_SECRET=...

# URLs
TEAMS_WEBHOOK_URL=...
FEEDBACK_SERVER_BASE_URL=https://floke-ai.duckdns.org/feedback
```
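Since missing credentials were a recurring failure mode (see the Docker environment notes above), a startup guard can make the problem explicit instead of failing later on an API call. A hypothetical helper, not part of the commit:

```python
import os

# Variables the Lead Engine cannot run without (see .env section above).
REQUIRED_VARS = [
    "INFO_Application_ID", "INFO_Tenant_ID", "INFO_Secret",
    "CAL_APPID", "CAL_TENNANT_ID", "CAL_SECRET",
    "TEAMS_WEBHOOK_URL", "FEEDBACK_SERVER_BASE_URL",
]

def missing_vars(env=os.environ) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_vars()` at process start and aborting with a clear message would surface a broken `.env` pass-through immediately.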
---

*Documentation as of: March 5, 2026*

*Task: [31988f42]*
@@ -8,6 +8,10 @@ from threading import Thread, Lock
import uvicorn
from fastapi import FastAPI, Response, BackgroundTasks
import msal
from dotenv import load_dotenv

# Load environment variables from /app/.env
load_dotenv(dotenv_path="/app/.env", override=True)

# --- Time zone configuration ---
TZ_BERLIN = ZoneInfo("Europe/Berlin")
@@ -60,10 +64,15 @@ def check_calendar_availability():
        "availabilityViewInterval": 60  # Check availability in 1-hour blocks
    }

    url = f"{GRAPH_API_ENDPOINT}/users/{TARGET_EMAIL}/calendarView"
    params = {
        "startDateTime": start_time.isoformat(),
        "endDateTime": end_time.isoformat(),
        "$top": 5
    }

    try:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code == 200:
            events = response.json().get("value", [])
            if not events:
@@ -75,6 +84,12 @@ def check_calendar_availability():
                subject = event.get('subject', 'No Subject')
                start = event.get('start', {}).get('dateTime')
                if start:
                    # Fix for 7-digit microseconds from Graph API (e.g. 2026-03-09T17:00:00.0000000)
                    if "." in start:
                        main_part, frac_part = start.split(".")
                        # Truncate the fraction to 6 digits max
                        start = f"{main_part}.{frac_part[:6]}"

                    dt_obj = datetime.fromisoformat(start.replace('Z', '+00:00')).astimezone(TZ_BERLIN)
                    start_formatted = dt_obj.strftime('%A, %d.%m.%Y um %H:%M Uhr')
                else: start_formatted = "N/A"
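One caveat with the committed fix: it splits once on `.`, so a trailing `Z` (or offset) after the fraction would be folded into the digits and dropped by the truncation. A slightly more defensive variant, sketched with stdlib only (not the committed code):

```python
from datetime import datetime

def parse_graph_datetime(value: str) -> datetime:
    """Parse a Graph dateTime, truncating 7-digit fractions to microseconds."""
    value = value.replace("Z", "+00:00")
    if "." in value:
        main, _, rest = value.partition(".")
        # separate fraction digits from a possible UTC-offset suffix
        i = 0
        while i < len(rest) and rest[i].isdigit():
            i += 1
        digits, suffix = rest[:i], rest[i:]
        value = f"{main}.{digits[:6]}{suffix}" if digits else f"{main}{suffix}"
    return datetime.fromisoformat(value)
```

With the `Prefer: outlook.timezone="Europe/Berlin"` header Graph returns naive local times, so the `Z` branch is rarely hit, but handling it costs nothing.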
@@ -47,21 +47,66 @@ def get_access_token(client_id, client_secret, tenant_id):
    return result.get('access_token')

def get_availability(target_email, app_creds):
    print(f"DEBUG: Requesting availability for {target_email}")
    token = get_access_token(*app_creds)
    if not token:
        print("DEBUG: Failed to acquire access token.")
        return None

    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json", "Prefer": 'outlook.timezone="Europe/Berlin"'}
    start_time = datetime.now(TZ_BERLIN).replace(hour=0, minute=0, second=0, microsecond=0)
    end_time = start_time + timedelta(days=3)
    # Use 15-minute intervals for finer granularity
    payload = {"schedules": [target_email], "startTime": {"dateTime": start_time.isoformat()}, "endTime": {"dateTime": end_time.isoformat()}, "availabilityViewInterval": 15}

    try:
        url = f"{GRAPH_API_ENDPOINT}/users/{target_email}/calendar/getSchedule"
        r = requests.post(url, headers=headers, json=payload)
        print(f"DEBUG: API Status Code: {r.status_code}")

        if r.status_code == 200:
            view = r.json()['value'][0].get('availabilityView', '')
            print(f"DEBUG: Availability View received (Length: {len(view)})")
            return start_time, view, 15
        else:
            print(f"DEBUG: API Error Response: {r.text}")
    except Exception as e:
        print(f"DEBUG: Exception during API call: {e}")
    return None

def find_slots(start, view, interval):
    """
    Parses availability string: '0'=Free, '2'=Busy.
    Returns 2 free slots (start times) within business hours (09:00 - 16:30),
    excluding weekends (Sat/Sun), with approx. 3 hours distance between them.
    """
    slots = []
    first_slot = None

    # Iterate through the view string
    for i, status in enumerate(view):
        if status == '0':  # '0' means Free
            slot_time = start + timedelta(minutes=i * interval)

            # Constraints:
            # 1. Mon-Fri only
            # 2. Business hours (09:00 - 16:30)
            # 3. Future only
            if slot_time.weekday() < 5 and (9 <= slot_time.hour < 17) and slot_time > datetime.now(TZ_BERLIN):
                # Max start time 16:30
                if slot_time.hour == 16 and slot_time.minute > 30:
                    continue

                if first_slot is None:
                    first_slot = slot_time
                    slots.append(first_slot)
                else:
                    # Second slot should be at least 3 hours after the first
                    if slot_time >= first_slot + timedelta(hours=3):
                        slots.append(slot_time)
                        break
    return slots

def create_calendar_invite(lead_email, company, start_time):
    catchall = os.getenv("EMAIL_CATCHALL"); lead_email = catchall if catchall else lead_email
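The committed `find_slots` calls `datetime.now` internally, which makes its behavior hard to demonstrate. The same rules can be sketched with the clock injected as a parameter (a testability tweak and a different function name; not the committed signature):

```python
from datetime import datetime, timedelta

def find_free_slots(start, view, interval, now):
    """'0' = free; pick 2 weekday slots between 09:00 and 16:30, ~3h apart."""
    slots, first = [], None
    for i, status in enumerate(view):
        if status != '0':
            continue
        t = start + timedelta(minutes=i * interval)
        # weekdays only, business hours only, future only
        if t.weekday() >= 5 or not (9 <= t.hour < 17) or t <= now:
            continue
        if t.hour == 16 and t.minute > 30:  # latest start is 16:30
            continue
        if first is None:
            first = t
            slots.append(t)
        elif t >= first + timedelta(hours=3):
            slots.append(t)
            break
    return slots

# Monday 2026-03-09, fully free day at 15-minute resolution (96 x 15 min)
slots = find_free_slots(datetime(2026, 3, 9), "0" * 96, 15,
                        datetime(2026, 3, 9, 8, 0))
# first free business slot is 09:00; second must be >= 3h later, i.e. 12:00
```

With an entirely free Monday and "now" at 08:00, this yields 09:00 and 12:00 as the two proposed slots.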
lead-engine/trading_twins/test_calendar_logic.py (new file, 88 lines)
@@ -0,0 +1,88 @@
# lead-engine/trading_twins/test_calendar_logic.py
import sys
import os
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
from dotenv import load_dotenv
import msal
import requests

# Load environment variables from the root .env
load_dotenv(dotenv_path="/app/.env", override=True)

# Adjust the path so that we can import manager
sys.path.append('/app')

from trading_twins.manager import get_availability, find_slots

# Re-read variables to ensure we see what's loaded
CAL_APPID = os.getenv("CAL_APPID")
CAL_SECRET = os.getenv("CAL_SECRET")
CAL_TENNANT_ID = os.getenv("CAL_TENNANT_ID")

TZ_BERLIN = ZoneInfo("Europe/Berlin")

def test_internal():
    target = "e.melcer@robo-planet.de"
    print(f"🔍 Testing calendar logic for {target}...")

    # Debug token acquisition
    print("🔑 Authenticating with MS Graph...")
    authority = f"https://login.microsoftonline.com/{CAL_TENNANT_ID}"
    app_msal = msal.ConfidentialClientApplication(client_id=CAL_APPID, authority=authority, client_credential=CAL_SECRET)
    result = app_msal.acquire_token_silent([".default"], account=None)
    if not result:
        print("   ... acquiring a fresh token ...")
        result = app_msal.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

    if "access_token" in result:
        print("✅ Token received.")
        token = result['access_token']
    else:
        print(f"❌ Token error: {result.get('error')}")
        print(f"❌ Description: {result.get('error_description')}")
        return

    # Debug API call
    print("📡 Querying calendar...")
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json", "Prefer": 'outlook.timezone="Europe/Berlin"'}
    start_time = datetime.now(TZ_BERLIN).replace(hour=0, minute=0, second=0, microsecond=0)
    end_time = start_time + timedelta(days=3)

    payload = {
        "schedules": [target],
        "startTime": {"dateTime": start_time.isoformat(), "timeZone": "Europe/Berlin"},
        "endTime": {"dateTime": end_time.isoformat(), "timeZone": "Europe/Berlin"},
        "availabilityViewInterval": 15
    }

    try:
        url = f"https://graph.microsoft.com/v1.0/users/{target}/calendar/getSchedule"
        r = requests.post(url, headers=headers, json=payload)

        print(f"📡 API status: {r.status_code}")
        if r.status_code == 200:
            data = r.json()
            # print(f"DEBUG RAW: {data}")
            schedule = data['value'][0]
            view = schedule.get('availabilityView', '')
            print(f"✅ Availability (view length: {len(view)})")

            # Test slot finding
            slots = find_slots(start_time, view, 15)
            if slots:
                print(f"✅ {len(slots)} slots found:")
                for s in slots:
                    print(f"   📅 {s.strftime('%A, %d.%m.%Y um %H:%M')}")
            else:
                print("⚠️ No slots found (logic correct, but calendar fully booked?)")
        else:
            print(f"❌ API error: {r.text}")

    except Exception as e:
        print(f"❌ Exception during API call: {e}")

if __name__ == "__main__":
    test_internal()
@@ -49,6 +49,37 @@ http {
        proxy_http_version 1.1;
        proxy_read_timeout 86400;
    }

    location /gtm/ {
        auth_basic "Restricted Access - Local AI Suite";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://gtm-architect:3005/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /b2b/ {
        auth_basic "Restricted Access - Local AI Suite";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://b2b-marketing-assistant:3002/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /tr/ {
        auth_basic "Restricted Access - Local AI Suite";
        auth_basic_user_file /etc/nginx/.htpasswd;
        rewrite ^/tr/(.*) /$1 break;
        proxy_pass http://transcription-tool:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Feedback API (public)
    location /feedback/ {
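Unlike the `/gtm/` and `/b2b/` blocks, which rely on the trailing slash in `proxy_pass` to strip the prefix, the `/tr/` block uses an explicit `rewrite` before proxying. The regex semantics can be sanity-checked outside nginx (an illustrative sketch, not nginx itself):

```python
import re

# nginx: rewrite ^/tr/(.*) /$1 break;
def rewrite_tr(uri: str) -> str:
    """Strip the /tr/ prefix the way the nginx rewrite does."""
    return re.sub(r"^/tr/(.*)", r"/\1", uri)
```

So `/tr/upload` reaches the transcription-tool container as `/upload`, while non-matching URIs pass through unchanged.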