Learn how startups can adopt an AI-first approach to build smarter products, optimize resources, and gain competitive advantages through intelligent automation and data-driven development strategies.
The startup landscape is rapidly evolving, and artificial intelligence has shifted from being a nice-to-have feature to a fundamental competitive necessity. While traditional companies are retrofitting AI capabilities into existing systems, forward-thinking startups have the unique opportunity to build with an AI-first mindset from day one. This approach doesn't just mean adding machine learning models to your product—it means fundamentally reimagining how your startup operates, makes decisions, and delivers value to users.
The distinction between AI-first and AI-enabled approaches represents a paradigm shift that can determine whether your startup becomes a market leader or struggles to keep pace with more intelligent competitors. Companies like Spotify, Notion, and Linear haven't just integrated AI features; they've built their entire product philosophy around intelligent automation, predictive capabilities, and data-driven decision making from the ground up.
This comprehensive guide will walk you through the strategic, technical, and organizational transformations necessary to build a truly AI-first startup. From architectural decisions and data strategy to team structure and risk management, we'll provide concrete frameworks, code examples, and proven methodologies that successful startups use to harness artificial intelligence as their primary competitive advantage.
The fundamental difference between AI-first and AI-enabled approaches lies in timing and philosophy. AI-enabled companies bolt intelligent features onto existing products and processes, treating machine learning as an enhancement layer. AI-first companies, conversely, design every system component—from data collection to user interfaces—with intelligent automation as the core organizing principle.
Core principles of the AI-first mindset center on three foundational pillars: data centricity, intelligent automation, and predictive capabilities. Data centricity means treating data as your most valuable asset, designing every user interaction to generate high-quality training signals. This goes beyond basic analytics to encompass comprehensive behavioral tracking, user preference modeling, and environmental context capture. Intelligent automation involves building systems that learn and improve without constant human intervention, creating feedback loops that enhance performance over time. Predictive capabilities mean anticipating user needs, market changes, and system requirements before they become apparent through traditional metrics.
To assess AI readiness, startups should evaluate their current capabilities across five key dimensions: data infrastructure maturity, technical team AI literacy, product-market fit clarity, regulatory compliance requirements, and financial resources for experimentation. A scoring framework from 1-5 in each area provides a baseline for planning AI implementation timelines and resource allocation strategies.
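As a rough illustration, the sketch below rolls the five dimension scores into a single readiness signal; the field names and score bands are hypothetical rather than part of any formal framework.
// Hypothetical readiness scoring sketch: five dimensions rated 1-5, averaged
interface ReadinessAssessment {
  dataInfrastructure: number;      // 1-5
  teamAILiteracy: number;          // 1-5
  productMarketFitClarity: number; // 1-5
  regulatoryCompliance: number;    // 1-5
  experimentationBudget: number;   // 1-5
}

function assessReadiness(scores: ReadinessAssessment): string {
  const values = Object.values(scores);
  const average = values.reduce((sum, v) => sum + v, 0) / values.length;
  // Illustrative bands; calibrate against your own planning horizon
  if (average >= 4) return 'ready for an AI-first build';
  if (average >= 2.5) return 'invest in the weakest dimensions first';
  return 'build foundations before committing to an AI-first roadmap';
}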
Mapping traditional product development workflows to AI-enhanced processes requires fundamental rethinking of decision points and feedback mechanisms. Where traditional development relies on predetermined features and static business rules, AI-first development creates adaptive systems that evolve based on user behavior patterns and performance outcomes. This means replacing manual A/B testing with automated experimentation, static content with personalized experiences, and reactive customer support with predictive issue resolution.
Organizational alignment between technical and business stakeholders becomes critical when AI drives core product decisions. Business teams must understand model capabilities and limitations, while technical teams need deep domain knowledge to build relevant intelligence. This requires establishing shared vocabularies, joint success metrics, and collaborative planning processes that integrate AI considerations into every strategic decision.
Designing event-driven architectures that capture comprehensive user interactions forms the backbone of intelligent systems. Every user click, scroll, hover, and timing pattern becomes a potential signal for machine learning models. This requires implementing event streaming platforms like Apache Kafka or Amazon Kinesis that can handle millions of events per second while maintaining order and consistency.
Real-time data pipelines enable continuous learning systems that adapt to changing user behavior patterns instantly. Unlike batch processing that updates models daily or hourly, streaming architectures allow models to incorporate new information within milliseconds, creating responsive experiences that feel truly intelligent to users.
Microservices patterns that enable AI model integration provide the flexibility to experiment with different algorithms, update models independently, and scale intelligence components based on demand. Each microservice should encapsulate specific AI capabilities—recommendation engines, natural language processing, computer vision—while exposing consistent APIs for seamless integration.
API-first design principles for machine learning operations ensure that intelligent features can be consumed across multiple product surfaces and platforms. This means designing model serving endpoints with proper versioning, authentication, rate limiting, and monitoring from the initial implementation.
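One way to make that consistency concrete is a shared service contract. The interface below is a minimal sketch with illustrative names, not a prescribed API; each capability-specific microservice would implement the same shape so gateways, monitoring, and rollback tooling can treat recommendation, NLP, and vision services uniformly.
// Shared prediction contract sketch; names and fields are illustrative
interface PredictionService {
  readonly name: string;          // e.g. 'recommendations', 'nlp', 'vision'
  readonly modelVersion: string;  // surfaced for monitoring and rollback
  predict(input: Record<string, unknown>): Promise<{
    result: unknown;
    confidence: number;
    latencyMs: number;
  }>;
  healthCheck(): Promise<boolean>;
}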
Building containerized ML infrastructure using Kubernetes and Docker provides the scalability and portability necessary for growing startups. Container orchestration enables automatic scaling of model serving infrastructure based on demand, while maintaining consistent environments across development, staging, and production deployments.
Feature stores provide consistent data serving across models, ensuring that the same user attributes and calculated features are available to all machine learning systems. This prevents inconsistencies that can degrade model performance and create confusing user experiences when different AI components operate on different data representations.
// Real-time feature pipeline implementation with event streaming
import { Kafka, KafkaMessage } from 'kafkajs';
import { createClient, RedisClientType } from 'redis';
import { Logger } from 'winston';
interface UserEvent {
userId: string;
eventType: string;
timestamp: number;
properties: Record<string, any>;
}
interface FeatureVector {
userId: string;
features: Record<string, number>;
lastUpdated: number;
}
class RealTimeFeaturePipeline {
private kafka: Kafka;
private redis: RedisClientType;
private logger: Logger;
constructor(
kafkaBrokers: string[],
redisConfig: any,
logger: Logger
) {
this.kafka = new Kafka({
clientId: 'feature-pipeline',
brokers: kafkaBrokers,
retry: {
retries: 3,
initialRetryTime: 1000,
maxRetryTime: 30000
}
});
this.redis = createClient(redisConfig);
this.logger = logger;
}
async initialize(): Promise<void> {
try {
// node-redis v4 clients must be connected before issuing commands
await this.redis.connect();
const consumer = this.kafka.consumer({ groupId: 'feature-group' });
await consumer.connect();
await consumer.subscribe({ topic: 'user-events' });
await consumer.run({
eachMessage: async ({ message }) => {
await this.processEvent(message);
},
});
this.logger.info('Feature pipeline initialized successfully');
} catch (error) {
this.logger.error('Failed to initialize feature pipeline', error);
throw new Error(`Pipeline initialization failed: ${error.message}`);
}
}
private async processEvent(message: KafkaMessage): Promise<void> {
try {
if (!message.value) return;
const event: UserEvent = JSON.parse(message.value.toString());
const currentFeatures = await this.getCurrentFeatures(event.userId);
const updatedFeatures = this.updateFeatures(currentFeatures, event);
await this.storeFeatures(event.userId, updatedFeatures);
// Trigger model inference if significant feature change
if (this.shouldTriggerInference(currentFeatures, updatedFeatures)) {
await this.triggerModelInference(event.userId, updatedFeatures);
}
} catch (error) {
this.logger.error('Error processing event', { error, message });
// Don't throw to avoid stopping the consumer
}
}
private async getCurrentFeatures(userId: string): Promise<FeatureVector | null> {
try {
const cached = await this.redis.get(`features:${userId}`);
return cached ? JSON.parse(cached) : null;
} catch (error) {
this.logger.error('Redis get failed', error);
return null;
}
}
private updateFeatures(
current: FeatureVector | null,
event: UserEvent
): FeatureVector {
const features = current?.features || {};
// Update engagement metrics
features.sessionLength = (features.sessionLength || 0) + 1;
features.lastActivity = event.timestamp;
// Update event-specific features
switch (event.eventType) {
case 'page_view':
features.pageViews = (features.pageViews || 0) + 1;
break;
case 'click':
features.clickCount = (features.clickCount || 0) + 1;
break;
case 'purchase':
features.purchaseCount = (features.purchaseCount || 0) + 1;
features.totalSpent = (features.totalSpent || 0) +
(event.properties.amount || 0);
break;
}
return {
userId: event.userId,
features,
lastUpdated: Date.now()
};
}
private async storeFeatures(
userId: string,
features: FeatureVector
): Promise<void> {
try {
await this.redis.setEx(
`features:${userId}`,
3600, // 1 hour TTL
JSON.stringify(features)
);
} catch (error) {
this.logger.error('Failed to store features', error);
throw error;
}
}
private shouldTriggerInference(
current: FeatureVector | null,
updated: FeatureVector
): boolean {
if (!current) return true;
// Trigger if significant change in engagement
const engagementChange = Math.abs(
updated.features.sessionLength - current.features.sessionLength
);
return engagementChange > 10 ||
updated.features.purchaseCount !== current.features.purchaseCount;
}
private async triggerModelInference(
userId: string,
features: FeatureVector
): Promise<void> {
try {
// Publish to model inference topic
const producer = this.kafka.producer();
await producer.connect();
await producer.send({
topic: 'model-inference',
messages: [{
key: userId,
value: JSON.stringify({
userId,
features: features.features,
timestamp: Date.now()
})
}]
});
await producer.disconnect();
} catch (error) {
this.logger.error('Failed to trigger model inference', error);
}
}
}
Establishing data governance frameworks from day one prevents the data quality disasters that plague many scaling startups. This means implementing schema validation, data lineage tracking, and access control policies before you have thousands of users generating millions of events. Early governance investment pays exponential dividends as data volume grows.
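As a small example of governance at the collection boundary, the sketch below validates incoming events against an explicit schema before they enter the pipeline; the zod library and the specific field rules are assumptions chosen for illustration.
// Event schema validation sketch at the ingestion boundary (zod is an assumed choice)
import { z } from 'zod';

const UserEventSchema = z.object({
  userId: z.string().min(1),
  eventType: z.string().regex(/^[a-z_]+$/),
  timestamp: z.number().int().positive(),
  properties: z.record(z.unknown()).default({})
});

type ValidatedUserEvent = z.infer<typeof UserEventSchema>;

function validateEvent(raw: unknown): ValidatedUserEvent | null {
  const parsed = UserEventSchema.safeParse(raw);
  if (!parsed.success) {
    // Route invalid payloads to a dead-letter queue rather than dropping them silently
    console.warn('Schema violation', parsed.error.issues);
    return null;
  }
  return parsed.data;
}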
Automated data quality monitoring and validation systems catch anomalies before they corrupt model training or user experiences. These systems should monitor data freshness, completeness, accuracy, and consistency across all collection points, alerting teams immediately when quality degrades below acceptable thresholds.
Synthetic data generation strategies solve the cold start problem that many AI-first startups face—needing intelligent features before having sufficient real user data to train models. Techniques like generative adversarial networks and statistical sampling can create realistic training datasets that bootstrap AI capabilities while preserving user privacy.
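A minimal version of the statistical sampling approach might look like the sketch below; the event types, weights, and time ranges are placeholder values for bootstrapping models before real traffic arrives.
// Synthetic event generator sketch using weighted sampling; all values are placeholders
interface SyntheticEvent {
  userId: string;
  eventType: string;
  timestamp: number;
  properties: Record<string, number>;
}

const EVENT_WEIGHTS: Array<[string, number]> = [
  ['page_view', 0.6],
  ['click', 0.3],
  ['purchase', 0.1]
];

function sampleEventType(): string {
  const r = Math.random();
  let cumulative = 0;
  for (const [type, weight] of EVENT_WEIGHTS) {
    cumulative += weight;
    if (r <= cumulative) return type;
  }
  return EVENT_WEIGHTS[EVENT_WEIGHTS.length - 1][0];
}

function generateSyntheticEvents(userCount: number, eventsPerUser: number): SyntheticEvent[] {
  const events: SyntheticEvent[] = [];
  for (let u = 0; u < userCount; u++) {
    for (let e = 0; e < eventsPerUser; e++) {
      const eventType = sampleEventType();
      events.push({
        userId: `synthetic-user-${u}`,
        eventType,
        // Spread timestamps over the last 24 hours
        timestamp: Date.now() - Math.floor(Math.random() * 86_400_000),
        properties: eventType === 'purchase' ? { amount: 10 + Math.random() * 90 } : {}
      });
    }
  }
  return events;
}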
Privacy-preserving data collection mechanisms become increasingly important as regulatory requirements tighten and user awareness grows. Implementing differential privacy, federated learning, and on-device processing protects user data while maintaining the rich signals necessary for intelligent systems.
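As one concrete example, a differentially private aggregate can be produced by adding Laplace noise to a count before it leaves the collection layer; the epsilon and sensitivity values in this sketch are illustrative, and production use would follow a reviewed privacy budget.
// Differential-privacy sketch: noise an aggregate count with the Laplace mechanism
function laplaceNoise(scale: number): number {
  // Inverse-CDF sampling of the Laplace distribution
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function privatizedCount(trueCount: number, epsilon = 1.0, sensitivity = 1): number {
  const noisy = trueCount + laplaceNoise(sensitivity / epsilon);
  return Math.max(0, Math.round(noisy));
}

// Example: report roughly how many users clicked a feature without exposing the exact count
const reported = privatizedCount(1_204, 0.5);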
Scalable data warehousing with tools like Snowflake or BigQuery provides the analytical foundation for understanding user behavior patterns and model performance. These platforms offer the compute elasticity and SQL familiarity necessary for both technical and business teams to extract insights from growing datasets.
Real-time feature engineering pipelines transform raw events into meaningful signals that machine learning models can consume effectively. This involves calculating rolling averages, detecting behavioral anomalies, and aggregating multi-dimensional user attributes in milliseconds rather than hours.
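The sketch below shows one such signal, a per-user rolling average that could sit alongside the counters maintained by the feature pipeline above; the window size and storage choice are assumptions.
// Rolling-window feature sketch: average of the last N values for one user
class RollingAverage {
  private window: number[] = [];

  constructor(private readonly size: number) {}

  add(value: number): number {
    this.window.push(value);
    if (this.window.length > this.size) {
      this.window.shift();
    }
    return this.current();
  }

  current(): number {
    if (this.window.length === 0) return 0;
    return this.window.reduce((sum, v) => sum + v, 0) / this.window.length;
  }
}

// Usage: one instance per user, keyed in Redis or an in-memory store
const avgOrderValue = new RollingAverage(20);
avgOrderValue.add(42.5); // returns the updated rolling mean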
// Data quality monitoring system with automated alerts
import { BigQuery } from '@google-cloud/bigquery';
import { PubSub } from '@google-cloud/pubsub';
import { Logger } from 'winston';
import { IncomingWebhook } from '@slack/webhook';
interface QualityMetric {
name: string;
query: string;
threshold: number;
severity: 'low' | 'medium' | 'high' | 'critical';
}
interface QualityResult {
metric: string;
value: number;
threshold: number;
status: 'pass' | 'warn' | 'fail';
timestamp: number;
}
class DataQualityMonitor {
private bigquery: BigQuery;
private pubsub: PubSub;
private slack: IncomingWebhook;
private logger: Logger;
private qualityMetrics: QualityMetric[] = [
{
name: 'null_rate_user_events',
query: `
SELECT
COUNTIF(user_id IS NULL OR event_type IS NULL) / COUNT(*) as null_rate
FROM \`project.dataset.user_events\`
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
`,
threshold: 0.05, // 5% null rate threshold
severity: 'high'
},
{
name: 'duplicate_events',
query: `
SELECT
1 - (COUNT(DISTINCT event_id) / COUNT(*)) as duplicate_rate
FROM \`project.dataset.user_events\`
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
`,
threshold: 0.01, // 1% duplicate threshold
severity: 'medium'
},
{
name: 'data_freshness',
query: `
SELECT
TIMESTAMP_DIFF(
CURRENT_TIMESTAMP(),
MAX(event_timestamp),
MINUTE
) as minutes_since_last_event
FROM \`project.dataset.user_events\`
`,
threshold: 15, // 15 minutes freshness threshold
severity: 'critical'
},
{
name: 'schema_violations',
query: `
SELECT
COUNTIF(
NOT REGEXP_CONTAINS(event_type, r'^[a-z_]+$') OR
user_id = '' OR
event_timestamp IS NULL
) / COUNT(*) as violation_rate
FROM \`project.dataset.user_events\`
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
`,
threshold: 0.001, // 0.1% schema violation threshold
severity: 'high'
}
];
constructor(
projectId: string,
slackWebhookUrl: string,
logger: Logger
) {
this.bigquery = new BigQuery({ projectId });
this.pubsub = new PubSub({ projectId });
this.slack = new IncomingWebhook(slackWebhookUrl);
this.logger = logger;
}
async runQualityChecks(): Promise<QualityResult[]> {
const results: QualityResult[] = [];
for (const metric of this.qualityMetrics) {
try {
const result = await this.executeQualityCheck(metric);
results.push(result);
if (result.status !== 'pass') {
await this.handleQualityIssue(result, metric);
}
} catch (error) {
this.logger.error(`Quality check failed for ${metric.name}`, error);
// Create failure result
results.push({
metric: metric.name,
value: -1,
threshold: metric.threshold,
status: 'fail',
timestamp: Date.now()
});
await this.handleCheckFailure(metric, error);
}
}
return results;
}
private async executeQualityCheck(metric: QualityMetric): Promise<QualityResult> {
const [job] = await this.bigquery.createQueryJob({
query: metric.query,
location: 'US',
jobTimeoutMs: 30000 // 30 second timeout
});
const [rows] = await job.getQueryResults();
if (rows.length === 0) {
throw new Error(`No results returned for metric ${metric.name}`);
}
const value = Object.values(rows[0])[0] as number;
const status = this.evaluateThreshold(value, metric.threshold, metric.name);
return {
metric: metric.name,
value,
threshold: metric.threshold,
status,
timestamp: Date.now()
};
}
private evaluateThreshold(
value: number,
threshold: number,
metricName: string
): 'pass' | 'warn' | 'fail' {
// For freshness metrics, higher values are worse
if (metricName.includes('freshness')) {
if (value <= threshold) return 'pass';
if (value <= threshold * 1.5) return 'warn';
return 'fail';
}
// For rate metrics, higher values are worse
if (value <= threshold) return 'pass';
if (value <= threshold * 1.5) return 'warn';
return 'fail';
}
private async handleQualityIssue(
result: QualityResult,
metric: QualityMetric
): Promise<void> {
// Log the issue
this.logger.warn('Data quality issue detected', {
metric: result.metric,
value: result.value,
threshold: result.threshold,
status: result.status
});
// Publish to monitoring topic
try {
const topic = this.pubsub.topic('data-quality-alerts');
await topic.publishMessage({
data: Buffer.from(JSON.stringify({
...result,
severity: metric.severity
})),
attributes: {
severity: metric.severity,
metric: result.metric
}
});
} catch (error) {
this.logger.error('Failed to publish quality alert', error);
}
// Send Slack notification for high severity issues
if (metric.severity === 'high' || metric.severity === 'critical') {
await this.sendSlackAlert(result, metric);
}
}
private async sendSlackAlert(
result: QualityResult,
metric: QualityMetric
): Promise<void> {
try {
const emoji = result.status === 'fail' ? '🚨' : '⚠️';
const color = result.status === 'fail' ? '#ff0000' : '#ffaa00';
await this.slack.send({
text: `${emoji} Data Quality Alert: ${metric.name}`,
attachments: [{
color,
fields: [
{
title: 'Metric',
value: metric.name,
short: true
},
{
title: 'Current Value',
value: result.value.toFixed(4),
short: true
},
{
title: 'Threshold',
value: metric.threshold.toString(),
short: true
},
{
title: 'Status',
value: result.status.toUpperCase(),
short: true
}
],
footer: 'Data Quality Monitor',
ts: Math.floor(result.timestamp / 1000)
}]
});
} catch (error) {
this.logger.error('Failed to send Slack alert', error);
}
}
private async handleCheckFailure(
metric: QualityMetric,
error: any
): Promise<void> {
this.logger.error(`Quality check execution failed`, {
metric: metric.name,
error: error.message,
severity: 'critical'
});
if (metric.severity === 'critical') {
try {
await this.slack.send({
text: '💥 Critical Data Quality Check Failed',
attachments: [{
color: '#ff0000',
fields: [
{
title: 'Failed Metric',
value: metric.name,
short: false
},
{
title: 'Error',
value: error.message,
short: false
}
],
footer: 'Data Quality Monitor - System Error'
}]
});
} catch (slackError) {
this.logger.error('Failed to send failure alert to Slack', slackError);
}
}
}
async startContinuousMonitoring(intervalMinutes: number = 15): Promise<void> {
this.logger.info(`Starting continuous quality monitoring every ${intervalMinutes} minutes`);
const runChecks = async () => {
try {
const results = await this.runQualityChecks();
this.logger.info('Quality checks completed', {
total: results.length,
passed: results.filter(r => r.status === 'pass').length,
warned: results.filter(r => r.status === 'warn').length,
failed: results.filter(r => r.status === 'fail').length
});
} catch (error) {
this.logger.error('Continuous monitoring cycle failed', error);
}
};
// Run initial check
await runChecks();
// Schedule recurring checks
setInterval(runChecks, intervalMinutes * 60 * 1000);
}
}
Integrating ML experimentation into agile development sprints requires rethinking traditional sprint planning and success metrics. Machine learning experiments follow different timelines than feature development—model training and evaluation can take days or weeks, while traditional features ship in hours or days. Successful AI-first teams run parallel tracks: feature development for immediate user value and ML experimentation for long-term intelligence improvements.
Continuous integration for machine learning models presents unique challenges compared to traditional software CI/CD. Models require data validation, performance regression testing, and bias evaluation alongside standard code quality checks. Automated pipelines should validate model accuracy against holdout datasets, check for data drift that could degrade performance, and ensure model serving infrastructure remains stable under production load.
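A promotion gate in such a pipeline might look like the sketch below; the one-percentage-point accuracy margin and the drift budget are placeholders to tune against your own holdout sets and drift metrics.
// CI promotion gate sketch: block deployment on accuracy regression or input drift
interface ModelEvaluation {
  candidateAccuracy: number;   // accuracy of the new model on a fixed holdout set
  productionAccuracy: number;  // accuracy of the currently deployed model
  featureDriftScore: number;   // e.g. a population stability index across input features
}

function shouldPromoteModel(evalResult: ModelEvaluation): { promote: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (evalResult.candidateAccuracy < evalResult.productionAccuracy - 0.01) {
    reasons.push('accuracy regression beyond 1 percentage point');
  }
  if (evalResult.featureDriftScore > 0.2) {
    reasons.push('input feature drift above acceptable budget');
  }
  return { promote: reasons.length === 0, reasons };
}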
A/B testing frameworks for AI-powered features must account for the personalized nature of intelligent systems. Unlike static features where all users in the treatment group see identical experiences, AI features deliver different results to different users. This requires sophisticated statistical analysis that accounts for personalization effects while maintaining valid experimental design principles.
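At its core the comparison still reduces to standard hypothesis testing. The sketch below applies a two-proportion z-test to conversion rates between control and an AI-personalized treatment; segmenting results by personalization cohort would be the necessary next step.
// Two-proportion z-test sketch for conversion rates in an A/B experiment
function twoProportionZTest(
  controlConversions: number, controlUsers: number,
  treatmentConversions: number, treatmentUsers: number
): { zScore: number; significantAt95: boolean } {
  const p1 = controlConversions / controlUsers;
  const p2 = treatmentConversions / treatmentUsers;
  const pooled = (controlConversions + treatmentConversions) / (controlUsers + treatmentUsers);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / controlUsers + 1 / treatmentUsers)
  );
  const zScore = (p2 - p1) / standardError;
  return { zScore, significantAt95: Math.abs(zScore) > 1.96 };
}

// Example: 480/10,000 control conversions vs 530/10,000 treatment conversions
const result = twoProportionZTest(480, 10_000, 530, 10_000);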
Model versioning and rollback strategies become critical as AI features directly impact user experiences. Teams need the ability to instantly revert to previous model versions when performance degrades, while maintaining audit trails of all model changes. This requires treating trained models as versioned artifacts with the same rigor applied to application code.
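A stripped-down registry illustrating the rollback mechanic appears below; in practice the version history would live in a database or the serving platform's registry rather than in process memory, with the audit trail persisted alongside it.
// In-memory model registry sketch with instant rollback; storage details are illustrative
interface ModelVersion {
  version: string;
  artifactUri: string;   // location of the trained artifact (illustrative field)
  deployedAt: number;
}

class ModelRegistry {
  private history: ModelVersion[] = [];

  deploy(version: ModelVersion): void {
    this.history.push({ ...version, deployedAt: Date.now() });
  }

  current(): ModelVersion | undefined {
    return this.history[this.history.length - 1];
  }

  rollback(): ModelVersion | undefined {
    // Deactivate the current version and reinstate the previous one
    if (this.history.length < 2) return undefined;
    this.history.pop();
    return this.current();
  }
}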
Automated model performance monitoring systems track key metrics in real-time, alerting teams when model accuracy drops below acceptable thresholds. These systems monitor prediction latency, error rates, input data distribution shifts, and business metrics like conversion rates or user engagement that AI features aim to improve.
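The sketch below tracks a minimal set of serving metrics in process and flags threshold breaches; the 2% error-rate and 500ms latency limits are illustrative defaults, not recommendations.
// Lightweight serving-metrics tracker sketch with placeholder alert thresholds
class ModelMetricsTracker {
  private latencies: number[] = [];
  private errors = 0;
  private total = 0;

  record(latencyMs: number, failed: boolean): void {
    this.latencies.push(latencyMs);
    this.total += 1;
    if (failed) this.errors += 1;
  }

  snapshot(): { errorRate: number; p95LatencyMs: number; alerts: string[] } {
    const sorted = [...this.latencies].sort((a, b) => a - b);
    const p95LatencyMs = sorted[Math.floor(sorted.length * 0.95)] ?? 0;
    const errorRate = this.total === 0 ? 0 : this.errors / this.total;
    const alerts: string[] = [];
    if (errorRate > 0.02) alerts.push('error rate above 2%');
    if (p95LatencyMs > 500) alerts.push('p95 latency above 500ms');
    return { errorRate, p95LatencyMs, alerts };
  }
}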
User feedback loops for continuous model improvement create virtuous cycles where user interactions improve AI capabilities over time. This requires designing interfaces that capture both explicit feedback (ratings, corrections) and implicit signals (click-through rates, time spent, task completion) that indicate model performance quality.
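One way to model those signals is a single feedback type covering both explicit and implicit cases, published back onto the same event stream the feature pipeline consumes; the sketch below assumes a generic publish function rather than a specific client.
// Feedback capture sketch: explicit ratings and implicit engagement share one shape
type FeedbackSignal =
  | { kind: 'explicit'; userId: string; predictionId: string; rating: 1 | 2 | 3 | 4 | 5 }
  | { kind: 'implicit'; userId: string; predictionId: string; clicked: boolean; dwellTimeMs: number };

async function recordFeedback(
  signal: FeedbackSignal,
  publish: (topic: string, payload: string) => Promise<void>
): Promise<void> {
  // Publishing to the same topic family keeps training data and serving features consistent
  await publish('model-feedback', JSON.stringify({ ...signal, recordedAt: Date.now() }));
}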
// AI model serving API with automatic fallback mechanisms
import express, { Request, Response } from 'express';
import { Logger } from 'winston';
import axios from 'axios';
import NodeCache from 'node-cache';
import rateLimit from 'express-rate-limit';
interface PredictionRequest {
userId: string;
features: Record<string, any>;
context?: Record<string, any>;
}
interface PredictionResponse {
prediction: any;
confidence: number;
modelVersion: string;
latency: number;
fallbackUsed: boolean;
}
interface ModelEndpoint {
url: string;
version: string;
timeout: number;
maxRetries: number;
}
class AIModelServingAPI {
private app: express.Application;
private cache: NodeCache;
private logger: Logger;
private primaryModel: ModelEndpoint = {
url: process.env.PRIMARY_MODEL_URL || 'http://ml-primary:8080',
version: 'v1.2.3',
timeout: 2000,
maxRetries: 2
};
private fallbackModel: ModelEndpoint = {
url: process.env.FALLBACK_MODEL_URL || 'http://ml-fallback:8080',
version: 'v1.1.0',
timeout: 5000,
maxRetries: 1
};
private ruleBasedFallback = {
enabled: true,
defaultPrediction: { score: 0.5, category: 'neutral' }
};
constructor(logger: Logger) {
this.app = express();
this.cache = new NodeCache({ stdTTL: 300 }); // 5 minute cache
this.logger = logger;
this.setupMiddleware();
this.setupRoutes();
}
private setupMiddleware(): void {
// Rate limiting
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 1000, // limit each IP to 1000 requests per windowMs
message: 'Too many prediction requests from this IP'
});
this.app.use(limiter);
this.app.use(express.json({ limit: '10mb' }));
// Request logging
this.app.use((req, res, next) => {
const startTime = Date.now();
res.on('finish', () => {
const duration = Date.now() - startTime;
this.logger.info('Request completed', {
method: req.method,
url: req.url,
statusCode: res.statusCode,
duration,
userAgent: req.get('user-agent')
});
});
next();
});
}
private setupRoutes(): void {
this.app.post('/predict', this.handlePrediction.bind(this));
this.app.get('/health', this.handleHealth.bind(this));
this.app.get('/metrics', this.handleMetrics.bind(this));
}
private async handlePrediction(req: Request, res: Response): Promise<void> {
const startTime = Date.now();
try {
const request: PredictionRequest = req.body;
// Input validation
if (!request.userId || !request.features) {
res.status(400).json({
error: 'Missing required fields: userId and features'
});
return;
}
// Check cache first
const cacheKey = this.generateCacheKey(request);
let cachedResult = this.cache.get<PredictionResponse>(cacheKey);
if (cachedResult) {
this.logger.info('Cache hit for prediction', { userId: request.userId });
res.json(cachedResult);
return;
}
// Attempt prediction with fallback chain
const result = await this.predictWithFallback(request, startTime);
// Cache successful results
if (result && !result.fallbackUsed) {
this.cache.set(cacheKey, result);
}
res.json(result);
} catch (error) {
this.logger.error('Prediction request failed', {
error: error.message,
userId: req.body?.userId,
duration: Date.now() - startTime
});
res.status(500).json({
error: 'Internal server error',
fallback: this.ruleBasedFallback.defaultPrediction
});
}
}
private async predictWithFallback(
request: PredictionRequest,
startTime: number
): Promise<PredictionResponse> {
// Try primary model
try {
const primaryResult = await this.callModel(this.primaryModel, request);
// Validate prediction quality
if (this.isValidPrediction(primaryResult)) {
return {
...primaryResult,
modelVersion: this.primaryModel.version,
latency: Date.now() - startTime,
fallbackUsed: false
};
} else {
this.logger.warn('Primary model returned low-quality prediction', {
userId: request.userId,
confidence: primaryResult.confidence
});
}
} catch (error) {
this.logger.error('Primary model failed', {
error: error.message,
userId: request.userId,
modelVersion: this.primaryModel.version
});
}
// Try fallback model
try {
const fallbackResult = await this.callModel(this.fallbackModel, request);
this.logger.info('Using fallback model', {
userId: request.userId,
reason: 'primary_model_failed'
});
return {
...fallbackResult,
modelVersion: this.fallbackModel.version,
latency: Date.now() - startTime,
fallbackUsed: true
};
} catch (error) {
this.logger.error('Fallback model failed', {
error: error.message,
userId: request.userId,
modelVersion: this.fallbackModel.version
});
}
// Use rule-based fallback
if (this.ruleBasedFallback.enabled) {
this.logger.warn('Using rule-based fallback', {
userId: request.userId,
reason: 'all_models_failed'
});
const ruleBasedPrediction = this.generateRuleBasedPrediction(request);
return {
prediction: ruleBasedPrediction,
confidence: 0.1, // Low confidence for rule-based
modelVersion: 'rule-based-v1',
latency: Date.now() - startTime,
fallbackUsed: true
};
}
throw new Error('All prediction methods failed');
}
private async callModel(
model: ModelEndpoint,
request: PredictionRequest
): Promise<any> {
let lastError: Error | null = null;
for (let attempt = 1; attempt <= model.maxRetries; attempt++) {
try {
const response = await axios.post(
`${model.url}/predict`,
{
features: request.features,
context: request.context
},
{
timeout: model.timeout,
headers: {
'Content-Type': 'application/json',
'X-User-ID': request.userId,
'X-Model-Version': model.version
}
}
);
if (response.status === 200 && response.data) {
return response.data;
} else {
throw new Error(`Invalid response: ${response.status}`);
}
} catch (error) {
lastError = error;
this.logger.warn(`Model call attempt ${attempt} failed`, {
modelUrl: model.url,
modelVersion: model.version,
attempt,
error: error.message
});
// Wait before retry (exponential backoff)
if (attempt < model.maxRetries) {
await this.sleep(Math.pow(2, attempt - 1) * 100);
}
}
}
throw lastError || new Error('Model call failed after all retries');
}
private isValidPrediction(prediction: any): boolean {
return (
prediction &&
typeof prediction.confidence === 'number' &&
prediction.confidence >= 0.3 // minimum confidence gate; tune per model
);
}
private generateCacheKey(request: PredictionRequest): string {
// Key on user plus a stable serialization of the feature payload
return `prediction:${request.userId}:${JSON.stringify(request.features)}`;
}
private generateRuleBasedPrediction(request: PredictionRequest): any {
// Static heuristic used only when both model endpoints are unavailable
return this.ruleBasedFallback.defaultPrediction;
}
private sleep(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms));
}
private handleHealth(req: Request, res: Response): void {
res.json({ status: 'ok', primaryModelVersion: this.primaryModel.version });
}
private handleMetrics(req: Request, res: Response): void {
res.json({ cache: this.cache.getStats() });
}
start(port: number = 3000): void {
this.app.listen(port, () => {
this.logger.info(`AI model serving API listening on port ${port}`);
});
}
}