At Near, we help top talent in Latin America find remote roles with US companies. Our mission is to create better lives by fostering a remote work culture that transcends borders.
This is a role for our client, an AI-powered workflow automation platform designed to simplify and streamline business processes. Their team is growing, and if you're looking to be part of a dynamic team, you can join them on their mission as a Data & Automation Engineer.
Overview
We’re hiring a full-time Data & Automation Engineer to build systems that power Harmony from the inside out. You’ll own scraping, enrichment, and automation, and help shape core parts of our internal tools and customer-facing platform.
This is a hybrid role — part data engineer, part automation architect, part full-stack builder. You’ll work closely with the founders to move fast, experiment boldly, and lay the foundations of the product and company.
You won’t be a cog in a machine. You’ll be building the machine.
Responsibilities:

Web Scraping & Data Systems
- Scrape platforms like Reddit, Twitter/X, LinkedIn, Product Hunt, and startup directories
- Handle anti-bot protections, proxies, headless browsers, rate limits, and structured output
- Structure, clean, and enrich data using APIs like Clearbit, PDL, Clay, Hunter, and others
- Deliver clean, usable datasets into Airtable, Notion, dashboards, and internal tools

Automation & Internal Workflows
- Build automations that eliminate repetitive tasks across marketing, ops, and outreach
- Tools you’ll use: Zapier, Make, Browserflow, Gumloop, n8n, and custom Python scripts
- Connect systems across Slack, Notion, Airtable, Gmail, LinkedIn, Webflow, and more
- Set up live, scalable workflows that save us hundreds of hours per month

Platform & Web Engineering
- Contribute to the Harmony platform UI (React + Next.js)
- Build lightweight internal tools, dashboards, and experimental landing pages
- Spin up launch campaigns, lead-gen tools, and microsites quickly with clean code and clear UI

Bonus: AI & Intelligent Agents
- Work with GPT-4, Claude, LangChain, or other LLM tools to enrich and summarize data
- Help us prototype intelligent workflows that act on scraped/enriched info autonomously
Requirements
Must-Have:
- 4–5 years of experience as a Data Scientist or Full Stack Developer.
- Advanced proficiency in Python, with hands-on experience in web scraping, scripting, and data handling (including APIs, JSON, and pagination).
- Experience using scraping tools such as Playwright, Puppeteer, BeautifulSoup, or Scrapy, and familiarity with anti-bot strategies, proxies, and headless browsers.
- Skilled at integrating and working with REST APIs, including handling authentication, rate limits, and data transformation.
- Strong understanding of structured data formats (CSV, JSON) and experience working with tools like Airtable, Notion, and Google Sheets.
- Familiarity with automation tools such as Zapier, Make, Browserflow, or similar.
- Experience using React and Next.js to build UI components, internal dashboards, or microsites.
- Comfortable working independently in a fast-paced, high-ownership environment, and able to prioritize speed while building scalable, reliable solutions.
- Availability to work Monday to Friday, 8:00 AM to 6:00 PM PST.
Nice-to-Have:
- Prior experience working in a startup environment or as part of a founding/early team.
- Exposure to LLM tools like GPT-4, Claude, LangChain, and experience integrating them into intelligent workflows.
- Familiarity with tools such as Supabase, Firebase, Retool, or the Notion API.
- Built internal tools, launch campaigns, or growth/lead-gen microsites that support marketing, sales, or operations teams.
- Strong attention to usability and a good eye for clean UI/UX, even when building internal tools.
Perks - What’s in it for you?
- Competitive compensation in US dollars
- Remote, full-time position
- 12 public holidays + 2 weeks of PTO