
CloudWalk
Data Analyst
We’re not here to waste your time. Read this carefully and only apply if you’re ready to do real work that matters.
What the job entails:
You’ll be part of a small, sharp team applying Network Science to real-world business problems inside a high-impact AI exploration unit. We’re turning raw data into meaningful graphs, analyzing connections, detecting hidden patterns, and building tools to help autonomous agents make better decisions.
You won’t be optimizing dashboards or stuck tweaking marketing reports. You’ll be building intelligence infrastructure.
Your main tasks:
Transform tabular data into graphs that expose structure and behavior.
Research, prototype, and deploy graph algorithms for community detection, centrality analysis, anomaly detection, and more (see the sketch after this list).
Build and maintain ETL pipelines for graph data extraction and enrichment.
Design graph visualizations to support human and machine understanding.
Collaborate with Data Scientists and LLM Engineers to integrate network-based reasoning into our autonomous systems.
Experiment often, document clearly, and ship code that matters.
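
For a concrete (if tiny) flavor of the first two tasks, here is a minimal sketch in Python using pandas and NetworkX; the payer/payee columns and the sample rows are hypothetical illustrations, not CloudWalk data:

    # Minimal sketch: turn tabular data into a graph, then compute
    # centrality and detect communities. Column names and rows below
    # are hypothetical examples only.
    import pandas as pd
    import networkx as nx
    from networkx.algorithms import community

    # Hypothetical transaction-style tabular data: who paid whom, how much.
    df = pd.DataFrame({
        "payer":  ["a", "a", "b", "c", "d", "e", "e"],
        "payee":  ["b", "c", "c", "d", "e", "f", "a"],
        "amount": [10.0, 5.0, 7.5, 3.0, 12.0, 9.0, 4.0],
    })

    # Each row becomes a weighted edge: tabular data -> graph.
    G = nx.from_pandas_edgelist(df, source="payer", target="payee", edge_attr="amount")

    # Centrality: which nodes sit at the heart of the network?
    centrality = nx.degree_centrality(G)
    print(sorted(centrality.items(), key=lambda kv: -kv[1]))

    # Community detection via greedy modularity maximization.
    for i, group in enumerate(community.greedy_modularity_communities(G)):
        print(f"community {i}: {sorted(group)}")

The same pattern scales up: swap the inline DataFrame for a SQL extract (PostgreSQL or BigQuery) and NetworkX for cuGraph once the graph outgrows a single machine.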
Technologies / techniques used:
Python (heavy use)
SQL (PostgreSQL, BigQuery)
Graph libraries (e.g., NetworkX, cuGraph, Graphistry)
Visualization tools (Plotly, Dash, D3, etc.)
Neo4j or other graph databases
GCP (preferred) or any major cloud provider
GitHub + CI/CD pipelines
What you’ll need:
Strong Python and SQL skills.
Ability to break down abstract problems into experimental pipelines.
Clear communication skills in Portuguese and English.
Curiosity and initiative. You find answers; you don’t wait for them.
Solid data wrangling and visualization abilities.
Willingness to go deep into Network Science, even if you’re not an expert (yet).
Nice to have:
Experience with NetworkX, Neo4j, or any graph database.
Previous experience with Elixir codebases, or openness to learn and work with them.
Background in graph theory, link prediction, or community detection.
Knowledge of probabilistic modeling or LLM integration with graph-based systems.
Familiarity with GCP and large-scale data pipelines.
Recruiting process outline:
Open-ended technical case – you’ll analyze a small graph dataset and extract insights.
Technical interview – discuss your solution and background.
Cultural interview – align expectations and assess mutual fit.
If you’re not interested in doing a real technical assessment or engaging in an honest conversation, don’t apply.
To apply for this job, please visit jobs.lever.co.