[31f88f42] No new commits in this session.

2026-03-10 13:54:07 +00:00
parent a3f79db2d2
commit 3fd3c5acfa
8 changed files with 268 additions and 9 deletions

View File

@@ -1 +1 @@
-{"task_id": "31b88f42-8544-8010-8793-e7688563a69e", "token": "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", "readme_path": "heatmap-tool", "session_start_time": "2026-03-09T14:44:44.115786"}
+{"task_id": "31f88f42-8544-80ad-9573-e9f7f398e8b1", "token": "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", "readme_path": "connector-superoffice/README.md", "session_start_time": "2026-03-10T13:54:05.556762"}

View File

@@ -0,0 +1,102 @@
# Complete Development History of the Account-Matching Algorithm
This file merges the Git history of the original file (`duplicate_checker.py`) and its successor (`company_deduplicator.py`).
Because a past rename split the history in Git, the full timeline is documented here in chronological order.
## Part 2: Further Development & Refactoring (Nov 2025 - today)
*Path: _legacy_gsheets_system/company_deduplicator.py*
2026-03-07 | d1b77fd2 | [30388f42] Infrastructure Hardening: Repaired CE/Connector DB schema, fixed frontend styling build, implemented robust echo shield in worker v2.1.1, and integrated Lead Engine into gateway.
2026-01-07 | 95634d7b | feat(company-explorer): Initial Web UI & Backend with Enrichment Flow
2025-11-09 | 00edd44b | feat: Integrated parent-account logic for internal deduplication
2025-11-09 | 37182b3a | feat: Implement internal deduplication and refactor the script
2025-11-08 | 99867225 | feat(duplicate_checker): Improved candidate selection and match prioritization
2025-11-06 | 1dd86d8e | duplicate_checker_old.py updated
2025-11-06 | 0a729f2d | duplicate_checker_old.py updated
2025-11-06 | a67615ad | duplicate_checker_old.py updated
2025-11-06 | 2df8441b | duplicate_checker_old.py added
---
## Part 1: Origin & Experiments (Aug 2025 - Sep 2025)
*Path: ARCHIVE_legacy_scripts/duplicate_checker.py*
2026-03-07 | d1b77fd2 | [30388f42] Infrastructure Hardening: Repaired CE/Connector DB schema, fixed frontend styling build, implemented robust echo shield in worker v2.1.1, and integrated Lead Engine into gateway.
2025-09-24 | da9d97da | duplicate_checker.py updated
2025-09-24 | fa58a870 | duplicate_checker.py updated
2025-09-10 | 5fa5a292 | duplicate_checker.py updated
2025-09-10 | db696592 | duplicate_checker.py updated
2025-09-08 | ae975367 | NEW: Integrated a trained machine-learning model (XGBoost) for the match decision
2025-09-05 | 24e32da5 | duplicate_checker.py updated
2025-09-05 | f5af3023 | duplicate_checker.py updated
2025-09-05 | 7a273bf2 | duplicate_checker.py updated
2025-09-05 | 538a0f28 | duplicate_checker.py updated
2025-09-05 | f160fc0f | duplicate_checker.py updated
2025-09-04 | 491254a8 | Feat: Matching logic with weighted scoring & interactive mode (v3.0)
2025-08-18 | 7cf23759 | duplicate_checker.py updated
2025-08-18 | b586bb3d | duplicate_checker.py updated
2025-08-18 | 7d07a526 | duplicate_checker.py updated
2025-08-18 | 721cb39c | duplicate_checker.py updated
2025-08-18 | af2e60f9 | duplicate_checker.py updated
2025-08-18 | 7d76a38c | duplicate_checker.py updated
2025-08-08 | e916e4eb | feat(duplicate-checker): Quality-first++ Domain-Gate, Location-Penalties, Smart Blocking (IDF-ligh
2025-08-08 | 56430d68 | duplicate_checker.py updated
2025-08-08 | 96ba680c | duplicate_checker.py updated
2025-08-08 | aea5d45c | feat(duplicate-checker): quality-first Matching (Domain-Gate, Location-Penalties, Smart Blocking)
2025-08-08 | 8e80a3f7 | duplicate_checker.py updated
2025-08-08 | f420b84a | duplicate_checker.py updated
2025-08-08 | ec56daa9 | duplicate_checker.py updated
2025-08-08 | be3f48ac | url_check only for matching
2025-08-08 | aa4cf6ed | url check added
2025-08-06 | 4cd5dccc | duplicate_checker.py updated
2025-08-06 | e58e493e | duplicate_checker.py updated
2025-08-06 | 3febe145 | duplicate_checker.py updated
2025-08-06 | 63d014b0 | duplicate_checker.py updated
2025-08-06 | 4f6d51df | duplicate_checker.py updated
2025-08-06 | 558b75f3 | duplicate_checker.py updated
2025-08-06 | e43efa44 | duplicate_checker.py updated
2025-08-06 | cfd1e8b5 | duplicate_checker.py updated
2025-08-06 | 8d717f3b | duplicate_checker.py updated
2025-08-06 | 99dec723 | duplicate_checker.py updated
2025-08-06 | a3315eae | duplicate_checker.py updated
2025-08-06 | b9a046bd | Add Logging
2025-08-06 | 9193ab1a | duplicate_checker.py updated
2025-08-06 | 4aa1effe | duplicate_checker.py updated
2025-08-06 | dfcb270a | duplicate_checker.py updated
2025-08-06 | 6b4c8295 | duplicate_checker.py updated
2025-08-06 | 4a41ffb0 | duplicate_checker.py updated
2025-08-05 | 600b977a | duplicate_checker.py updated
2025-08-05 | b876ea20 | duplicate_checker.py updated
2025-08-05 | 2f70e05e | duplicate_checker.py updated
2025-08-05 | 9685bc5a | duplicate_checker.py updated
2025-08-05 | 270a5fc0 | duplicate_checker.py updated
2025-08-05 | 7d3821ad | chat GPT version
2025-08-04 | 3a8809e0 | duplicate_checker.py updated
2025-08-04 | 38612a85 | duplicate_checker.py updated
2025-08-04 | c777d75d | duplicate_checker.py updated
2025-08-04 | bc9591a4 | Add Logging
2025-08-04 | c0db46d2 | duplicate_checker.py updated
2025-08-03 | 7c9ee2f7 | duplicate_checker.py updated
2025-08-03 | 9cc291d5 | duplicate_checker.py updated
2025-08-03 | 40de8117 | duplicate_checker.py updated
2025-08-03 | c0cade7a | Add Logging
2025-08-03 | 940aa52b | duplicate_checker.py updated
2025-08-01 | 05ecb012 | Revert to the stable version
2025-08-01 | e48e44ea | duplicate_checker.py updated
2025-08-01 | a10caa5a | revoce
2025-08-01 | add6ea53 | duplicate_checker.py updated
2025-08-01 | a58e4fc1 | duplicate_checker.py updated
2025-08-01 | 8a7426df | duplicate_checker.py updated
2025-08-01 | 533796b6 | duplicate_checker.py updated
2025-08-01 | 4f60cc68 | duplicate_checker.py updated
2025-08-01 | a6853a2c | duplicate_checker.py updated
2025-08-01 | 88c7ee4a | duplicate_checker.py updated
2025-08-01 | 77852b8a | duplicate_checker.py updated
2025-08-01 | 89ccd86f | revoce 2
2025-08-01 | 2341149c | revoce
2025-08-01 | a92da7f8 | duplicate_checker.py updated
2025-08-01 | aeda711d | duplicate_checker.py updated
2025-08-01 | f5e28824 | duplicate_checker.py updated
2025-08-01 | 94a2dc88 | duplicate_checker.py updated
2025-08-01 | e7c8a66f | duplicate_checker.py updated
2025-08-01 | 67b431b0 | duplicate_checker.py added

View File

@@ -107,6 +107,12 @@ class ReportMistakeRequest(BaseModel):
    quote: Optional[str] = None
    user_comment: Optional[str] = None

class CompanyMatchRequest(BaseModel):
    name: str
    website: Optional[str] = None
    city: Optional[str] = None
    country: Optional[str] = "Deutschland"

class ProvisioningRequest(BaseModel):
    so_contact_id: int
    so_person_id: Optional[int] = None
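The new `CompanyMatchRequest` model defaults `country` to `"Deutschland"` when a caller omits it. A minimal stand-in to illustrate the defaulting behavior (a plain dataclass, not the actual Pydantic model, so no validation is performed):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative mirror of CompanyMatchRequest; field names and defaults
# follow the diff above, but this is a sketch, not the service's model.
@dataclass
class CompanyMatchRequestSketch:
    name: str
    website: Optional[str] = None
    city: Optional[str] = None
    country: Optional[str] = "Deutschland"

req = CompanyMatchRequestSketch(name="Wolfra Kelterei", city="Erding")
print(req.country)  # → Deutschland
```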
@@ -302,6 +308,58 @@ def unsubscribe_contact(token: str, db: Session = Depends(get_db)):

def health_check(username: str = Depends(authenticate_user)):
    return {"status": "ok", "version": settings.VERSION, "db": settings.DATABASE_URL}

@app.post("/api/match-company/reload")
async def reload_matching_service(db: Session = Depends(get_db), username: str = Depends(authenticate_user)):
    """
    Forces the matching service (Deduplicator) to reload all company records from the DB.
    Should be called after major imports or SuperOffice syncs.
    """
    try:
        app.state.deduplicator = Deduplicator(db)
        return {
            "status": "success",
            "records_loaded": len(app.state.deduplicator.reference_data)
        }
    except Exception as e:
        logger.error(f"Failed to reload matching service: {e}")
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/api/match-company")
async def match_company(request: CompanyMatchRequest, db: Session = Depends(get_db), username: str = Depends(authenticate_user)):
    """
    Centralized account matching service.
    Checks whether a company already exists in SuperOffice (via the Company Explorer DB).
    Returns a list of matches with scores and CRM IDs.
    """
    try:
        # Lazy initialization of the Deduplicator
        if not hasattr(app.state, 'deduplicator'):
            logger.info("Initializing Deduplicator for the first time...")
            app.state.deduplicator = Deduplicator(db)

        # Prepare the candidate dict for the service
        candidate = {
            'name': request.name,
            'website': request.website,
            'city': request.city,
            'country': request.country
        }
        results = app.state.deduplicator.find_duplicates(candidate)

        # Return structured results
        return {
            "query": candidate,
            "match_found": len(results) > 0,
            "best_match": results[0] if results else None,
            "all_matches": results
        }
    except Exception as e:
        logger.error(f"Error in company matching: {e}")
        import traceback
        logger.error(traceback.format_exc())
        raise HTTPException(status_code=500, detail=f"Matching failed: {str(e)}")

@app.post("/api/provision/superoffice-contact", response_model=ProvisioningResponse)
def provision_superoffice_contact(
    req: ProvisioningRequest,
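Callers of the new `/api/match-company` endpoint receive a `{query, match_found, best_match, all_matches}` payload. A hypothetical consumer sketch (the `pick_crm_id` helper and the `min_score` cutoff are illustrative additions, not part of the service; the sample CRM ID is invented):

```python
# Interprets the /api/match-company response shape shown in the diff above.
def pick_crm_id(response: dict, min_score: int = 85):
    """Return the CRM ID of the best match, or None if nothing clears the cutoff."""
    if not response.get("match_found"):
        return None
    best = response["best_match"]
    return best["crm_id"] if best["score"] >= min_score else None

# Sample payload mimicking the endpoint's structure (values are made up)
sample = {
    "match_found": True,
    "best_match": {"company_id": 12, "crm_id": 4711, "name": "Wolfra Kelterei", "score": 92, "details": {}},
    "all_matches": [],
}
print(pick_crm_id(sample))  # → 4711
```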

View File

@@ -63,7 +63,8 @@ class Deduplicator:
        Optimized for 10k-50k records.
        """
        logger.info("Loading reference data for deduplication...")
-       query = self.db.query(Company.id, Company.name, Company.website, Company.city, Company.country)
+       # Include crm_id in the query
+       query = self.db.query(Company.id, Company.name, Company.website, Company.city, Company.country, Company.crm_id)
        companies = query.all()

        for c in companies:
@@ -72,6 +73,7 @@
            record = {
                'id': c.id,
+               'crm_id': c.crm_id,
                'name': c.name,
                'normalized_name': norm_name,
                'normalized_domain': norm_domain,
@@ -81,7 +83,7 @@
            self.reference_data.append(record)

            # Build indexes
-           if norm_domain:
+           if norm_domain and norm_domain != "k.a.":
                self.domain_index.setdefault(norm_domain, []).append(record)

            # Token frequency
@@ -113,7 +115,7 @@
        candidates_to_check = {}  # Map ID -> Record

        # 1. Domain match (fastest)
-       if c_norm_domain and c_norm_domain in self.domain_index:
+       if c_norm_domain and c_norm_domain != "k.a." and c_norm_domain in self.domain_index:
            for r in self.domain_index[c_norm_domain]:
                candidates_to_check[r['id']] = r
@@ -123,6 +125,14 @@
            for r in self.token_index[rtok]:
                candidates_to_check[r['id']] = r

+       if not candidates_to_check:
+           # Fallback: with no domain or rare-token match there may still be an exact name match
+           # that was never indexed correctly (e.g. all tokens are stop words). Rare but possible,
+           # so scan reference_data directly when the pool is empty and the name is long enough.
+           if len(c_norm_name) > 3:
+               for r in self.reference_data:
+                   if r['normalized_name'] == c_norm_name:
+                       candidates_to_check[r['id']] = r

        if not candidates_to_check:
            return []
@@ -135,12 +145,14 @@
            )

            # Threshold logic (weak vs. strong)
            # A match is "weak" if there is no domain match AND no location match
            is_weak = (details['domain_match'] == 0 and not details['loc_match'])
            threshold = SCORE_THRESHOLD_WEAK if is_weak else SCORE_THRESHOLD

            if score >= threshold:
                matches.append({
                    'company_id': db_rec['id'],
+                   'crm_id': db_rec['crm_id'],
                    'name': db_rec['name'],
                    'score': score,
                    'details': details
@@ -155,11 +167,11 @@
        # Exact name shortcut
        if n1 and n1 == n2:
-           return 100, {'exact': True, 'domain_match': 0, 'loc_match': 0}
+           return 100, {'exact': True, 'domain_match': 0, 'loc_match': 1 if (cand['c'] and ref['city'] and cand['c'] == ref['city']) else 0, 'name_score': 100, 'penalties': 0}

        # Domain
        d1, d2 = cand['d'], ref['normalized_domain']
-       domain_match = 1 if (d1 and d2 and d1 == d2) else 0
+       domain_match = 1 if (d1 and d2 and d1 != "k.a." and d1 == d2) else 0

        # Location
        city_match = 1 if (cand['c'] and ref['city'] and cand['c'] == ref['city']) else 0
@@ -176,7 +188,8 @@
            ss = fuzz.token_sort_ratio(clean1, clean2)
            name_score = max(ts, pr, ss)
        else:
-           name_score = 0
+           # If cleaning removed everything, fall back to raw fuzzy matching on the normalized names
+           name_score = fuzz.ratio(n1, n2) if (n1 and n2) else 0

        # Penalties
        penalties = 0
@@ -194,7 +207,7 @@
        total = name_score
        if loc_match:
-           total += 10  # Bonus
+           total += 10  # Bonus for location match

        total -= penalties

View File

@@ -0,0 +1,44 @@
import sys
import os
import logging
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Add backend to path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from database import Company
from services.deduplication import Deduplicator

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Mock DB or use live DB (safely)
# The config uses /data/companies_v3_fixed_2.db in Docker, but locally it sits in the root.
DB_PATH = "../../companies_v3_fixed_2.db"
engine = create_engine(f"sqlite:///{DB_PATH}")
Session = sessionmaker(bind=engine)
db = Session()

def test_matching():
    dedup = Deduplicator(db)
    test_cases = [
        {"name": "Wolfra", "website": "wolfra.de", "city": "Erding"},
        {"name": "Wolfra Kelterei", "website": "wolfra.de", "city": "Erding"},
        {"name": "Wolfra Fruchtsaft GmbH", "website": "https://www.wolfra.de/", "city": "Erding"},
        {"name": "Müller GmbH", "city": "München"},  # Broad search
        {"name": "NonExistentCompany", "city": "Berlin"}
    ]
    for case in test_cases:
        print(f"\n--- Matching Query: {case['name']} ({case.get('website', 'no-url')}) ---")
        results = dedup.find_duplicates(case)
        if results:
            for i, res in enumerate(results[:3]):
                print(f"  [{i+1}] Match: {res['name']} (Score: {res['score']}) | CRM ID: {res['crm_id']}")
        else:
            print("  No matches found.")

if __name__ == "__main__":
    test_matching()

View File

@@ -0,0 +1,6 @@
from dotenv import load_dotenv
load_dotenv(override=True)
from superoffice_client import SuperOfficeClient
c = SuperOfficeClient()
res = c.search("Contact?$top=1&$select=contactId,name,department,orgNr,number,business/value,category/value,country/value,address,urlAddress,urls")
print(res)
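The probe above hands `SuperOfficeClient.search()` an OData-style query string (`$top`, `$select`). A tiny hypothetical helper showing how such a string is assembled; the helper itself is not part of the client, and the entity/field names come from the probe:

```python
# Hypothetical builder for the OData-style query string the probe passes
# to SuperOfficeClient.search(): "<Entity>?$top=<n>&$select=<f1,f2,...>"
def build_search_query(entity: str, fields: list, top: int = 1) -> str:
    return f"{entity}?$top={top}&$select={','.join(fields)}"

q = build_search_query("Contact", ["contactId", "name", "orgNr"], top=1)
print(q)  # → Contact?$top=1&$select=contactId,name,orgNr
```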

View File

@@ -73,6 +73,19 @@ def get_company_details(company_id: int) -> dict:
    """Fetches the full details for a company."""
    return _make_api_request("GET", f"/companies/{company_id}")

def match_company(name: str, website: str = None, city: str = None) -> dict:
    """
    Matches a company against the central matching service in the Company Explorer.
    Returns potential hits with scores and SuperOffice contact IDs.
    """
    payload = {
        "name": name,
        "website": website,
        "city": city,
        "country": "Deutschland"
    }
    return _make_api_request("POST", "/match-company", json_data=payload)

def create_contact(company_id: int, contact_data: dict) -> dict:
    """Creates a new contact for a company in the Company Explorer."""
    payload = {

View File

@@ -18,7 +18,17 @@ from sqlalchemy.orm import sessionmaker
# --- Setup Logging ---
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

import msal
from .models import init_db, ProposalJob, ProposedSlot

# Import the centralized matching service
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
try:
    from company_explorer_connector import match_company
except ImportError:
    logging.warning("Could not import company_explorer_connector. Matching logic might be disabled.")
    def match_company(name, **kwargs): return {"match_found": False}

# --- Database Setup (SQLite) ---
class TestLeadPayload(BaseModel):
    company_name: str
@@ -351,6 +361,19 @@ def book_slot(job_uuid: str, ts: int):
        return HTMLResponse(content=FALLBACK_HTML.format(ms_bookings_url=MS_BOOKINGS_URL))

    if create_calendar_invite(job.customer_email, job.customer_company, slot_time):
        # NEW: Account matching in SuperOffice (via Company Explorer)
        try:
            logging.info(f"MATCHING ACCOUNT: Checking for '{job.customer_company}' before Sale creation...")
            match_res = match_company(job.customer_company)
            if match_res.get("match_found"):
                best = match_res["best_match"]
                logging.info(f"✅ MATCH SUCCESS: Found CRM ID {best['crm_id']} for '{best['name']}' (Score: {best['score']})")
                # This CRM ID can now be used for the subsequent Sale creation in the next task.
            else:
                logging.warning(f"⚠️ NO MATCH FOUND in SuperOffice for '{job.customer_company}'. A new account will be needed.")
        except Exception as e:
            logging.error(f"❌ ERROR during SuperOffice matching check: {e}")

        job.status = "booked"
        db.commit()
        db.close()