Summary

OpenAI has introduced critic models to improve the reliability of AI code evaluation; the critics catch code errors better than humans and deliver the best results when paired with human evaluators. NVIDIA announced a new reference architecture for AI cloud providers that shortens AI solution deployment time and cuts costs while optimizing performance. A new paper proposes a data synthesis method that uses one billion personas to maximize the diversity and scalability of generated data. Figma announced a range of design-tool updates centered on AI features, and Groq dramatically improved the performance of Whisper Large V3. SK Group announced plans to invest 80 trillion won in AI and semiconductors by 2026.

OpenAI Improves the Reliability of AI Code Evaluation with Critic Models

Link, June 28, 2024,
OpenAI

  • Critic models catch code errors better than humans, improving the accuracy of code evaluation
  • CriticGPT is trained with RLHF to produce natural-language feedback that highlights problems in code
  • Critic models can sometimes hallucinate bugs, which may mislead human reviewers
  • With critic models, human-machine teams catch a similar number of bugs to LLM critics while hallucinating less than the model alone
  • Critic models successfully identified hundreds of errors in ChatGPT training data rated as “flawless”
  • Critic models are also effective on tasks other than code
  • The Force Sampling Beam Search (FSBS) technique balances detection of real bugs against hallucinated ones (a rough sketch of this idea follows the list)
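FSBS itself interleaves constrained sampling (forcing the critic to highlight specific code sections) with reward-model scoring; the paper has the details. The trade-off it tunes, between comprehensiveness and hallucinated bugs, can be illustrated with a much simpler reward-guided selection step. Everything in the sketch below (function names, the penalty term, the toy data) is an assumption for illustration, not OpenAI's implementation:

```python
# Illustrative sketch only (not OpenAI's code): pick the critique that maximizes
# reward-model score minus a penalty per flagged issue, so that longer, more
# "comprehensive" critiques must justify every extra claim. This mirrors the
# precision/comprehensiveness trade-off that FSBS tunes, not its exact algorithm.

def select_critique(candidates, reward_model, issue_penalty=0.1):
    """candidates: list of (critique_text, num_flagged_issues) tuples.
    reward_model: callable mapping critique_text -> float score."""
    return max(
        candidates,
        key=lambda c: reward_model(c[0]) - issue_penalty * c[1],
    )

if __name__ == "__main__":
    # Toy stand-ins: in practice candidates come from a critic LLM and the
    # scores from a trained reward model.
    candidates = [
        ("The loop never terminates when n == 0.", 1),
        ("The loop never terminates when n == 0; also the variable names are "
         "unclear and the file encoding might be wrong.", 3),
    ]
    toy_scores = {text: score for (text, _), score in zip(candidates, [0.8, 0.9])}
    best = select_critique(candidates, lambda text: toy_scores[text])
    print(best[0])  # the shorter critique wins once extra claims are penalized
```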

NVIDIA Unveils a New Reference Architecture for AI Cloud Providers

Link, June 26, 2024,
NVIDIA

  • The NVIDIA Cloud Partner reference architecture provides a blueprint for building high-performance, scalable, and secure data centers
  • Covers GPU servers, storage, networking, management solutions, and AI software
  • Reduces the time and cost of deploying AI solutions
  • Supports a wide range of AI and LLM workloads so that cloud providers can offer AI services
  • NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking provide fast, efficient east-west communication between GPU servers
  • NVIDIA BlueField-3 DPUs deliver high-performance north-south network connectivity and enable data storage acceleration, elastic GPU computing, and zero-trust security
  • NVIDIA AI Enterprise software, including Base Command Manager Essentials, helps cloud providers provision and manage their servers
  • The NVIDIA NeMo framework lets cloud providers train and fine-tune generative AI models
  • NVIDIA Riva provides speech services
  • NVIDIA RAPIDS Accelerator for Spark speeds up Spark workloads

A Data Synthesis Method Using One Billion Personas

Link, June 28, 2024,
Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, Dong Yu

  • Persona Hub, a collection of one billion personas automatically curated from web data, is used to drive synthetic data creation
  • Generates high-quality math and logical reasoning problems, knowledge-rich texts, and more for a wide range of scenarios
  • A 7B model fine-tuned on the synthesized math problems reaches 64.9% on the MATH benchmark, matching GPT-4-level performance
  • Maximizes the diversity and scalability of data synthesis, contributing to LLM research and development
  • Extends coverage, quality, and perspectives beyond existing instance-driven and key-point-driven approaches, strengthening the robustness of the data synthesis process
  • Can produce datasets for many purposes, including MATH problems, logical reasoning problems, user instructions, game NPCs, and tool development (a minimal sketch of persona-driven prompting follows this list)
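The core mechanism is simple: the same task prompt is combined with many different personas so that each persona steers the LLM toward a different perspective. A minimal sketch of that prompt construction is below; the persona strings, template wording, and the `call_llm` stub are illustrative assumptions, not the released Persona Hub pipeline:

```python
# Minimal sketch of persona-driven data synthesis: one task template, many personas,
# each yielding a different synthetic problem. All strings here are made up for
# illustration; the real Persona Hub contains ~1B web-curated personas.

PERSONAS = [
    "a chemical kinetics researcher",
    "a high-school track coach planning interval training",
    "a freight logistics dispatcher routing trucks",
]

TEMPLATE = ("Create a challenging math problem that {persona} might encounter "
            "in their work. State the problem only.")

def build_prompts(personas, template=TEMPLATE):
    """Expand one task template into one prompt per persona."""
    return [template.format(persona=p) for p in personas]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your model client of choice."""
    return f"[model output for: {prompt[:60]}...]"

if __name__ == "__main__":
    for prompt in build_prompts(PERSONAS):
        print(call_llm(prompt))
```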

Figma Announces a Range of Updates Centered on AI Features

Link, June 30, 2024,
Figma

  • The ‘Make Design’ feature generates designs from a text description
  • The ‘Search for Similar’ feature quickly finds similar elements
  • Includes a variety of AI features such as image background removal, multilingual translation, automatic layer renaming, and automatic prototype generation
  • Maximizes the efficiency of the design process and significantly improves the user experience
  • AI auto-generation features help users stay in their workflow
  • The update is aimed at making design work more efficient through AI
  • Improves the user experience and makes collaboration between design and development smoother

Groq Delivers a Major Performance Boost for Whisper Large V3

Link, June 28, 2024,
Groq

  • Whisper Large V3 is now available to the developer community via GroqCloud™
  • Achieves a 164x speed factor, transcribing a 10-minute audio file in just 3.7 seconds
  • Minimizes Word Error Rate (WER) at 10.3%, matching the lowest WER among providers on the leaderboard
  • Delivers the low-latency transcription needed for AI voice experiences
  • Whisper Large V3 is a pre-trained model for automatic speech recognition and speech translation
  • Groq's LPU™ Inference Engine enables low-latency AI inference
  • Availability on GroqCloud™ makes it easy for developers to use Whisper
  • Whisper's performance can be seen in the Project Media QA demo (a quick arithmetic check of the quoted figures follows this list)
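The headline numbers are easy to sanity-check. Assuming the benchmark file is exactly 10 minutes long (as the post states) and the published price of $0.03 per transcribed hour, a few lines of arithmetic reproduce the reported speed factor and per-1,000-minute price:

```python
# Back-of-the-envelope check of the figures quoted in the Groq announcement.
audio_seconds = 10 * 60        # 10-minute benchmark file
transcribe_seconds = 3.7       # reported wall-clock transcription time on Groq
speed_factor = audio_seconds / transcribe_seconds
print(f"speed factor ~= {speed_factor:.0f}x")  # ~162x, close to the reported 164x

price_per_hour = 0.03          # USD per hour of audio transcribed
price_per_1000_min = price_per_hour * (1000 / 60)
print(f"price ~= ${price_per_1000_min:.2f} per 1,000 minutes")  # ~$0.50
```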

SK Group to Invest 80 Trillion Won in AI and Semiconductors by 2026

Link, June 30, 2024,
SK Group

  • SK Group plans to invest 80 trillion won in future growth areas including AI and semiconductors
  • Aims for qualitative growth by responding preemptively to a rapidly changing market and focusing resources through "selection and concentration"
  • Plans to secure the 80 trillion won by 2026 through improved profitability, business-structure optimization, and greater synergy
  • SK hynix will invest 103 trillion won over five years to strengthen the competitiveness of its semiconductor business
  • A Semiconductor Committee will be established under the SUPEX Council effective July 1
  • CEOs agreed on the need to bring the total number of affiliates down to a "manageable range" and will pursue this in stages
  • SK currently has a total of 219 affiliates, which it plans to streamline to a manageable scope
Sources

This GPT assists users by creating a detailed daily newspaper in Korean based on provided links. It follows these steps: read the content, summarize each item with detailed points, and write a report. The report format is:

(today’s date in 년 월 일) AI 소식,

Summary

(overall short summary with good detail; in the Summary section, start each point with the company name, e.g. OpenAI에서는 ~~~를 발표하였습니다.)

Title,

한글제목

링크, date,
company name

  • detailed summary1, (use point-form style)
  • detailed summary2, (use point-form style)
  • detailed summary N, (use point-form style)

Title,

한글제목

링크, date,
company name

  • detailed summary1, (use point-form style)
  • detailed summary2, (use point-form style)
  • detailed summary N, (use point-form style)
###
https://cdn.openai.com/llm-critics-help-catch-llm-bugs-paper.pdf
OpenAI
Jun 28, 2024
Abstract:
Reinforcement learning from human feedback (RLHF) is fundamentally limited by the capacity of humans to correctly evaluate model output. To improve human evaluation ability and overcome that limitation, this work trains “critic” models that help humans more accurately evaluate model-written code. These critics are themselves LLMs trained with RLHF to write natural language feedback highlighting problems in code from real-world assistant tasks. On code containing naturally occurring LLM errors, model-written critiques are preferred over human critiques in 63% of cases, and human evaluation finds that models catch more bugs than human contractors paid for code review. We further confirm that our fine-tuned LLM critics can successfully identify hundreds of errors in ChatGPT training data rated as “flawless”, even though the majority of those tasks are non-code tasks and thus out-of-distribution for the critic model. Critics can have limitations of their own, including hallucinated bugs that could mislead humans into making mistakes they might have otherwise avoided, but human-machine teams of critics and contractors catch similar numbers of bugs to LLM critics while hallucinating less than LLMs alone.

Summary
Blog Post: Improving AI Reliability with Critic Models for Better Code Evaluation
In the swiftly changing world of artificial intelligence (AI), guaranteeing the reliability of AI-generated outputs is increasingly crucial. This is particularly true for AI models that generate or evaluate code, which can occasionally contain subtle bugs or errors not immediately noticeable. These errors are risky in enterprise environments where accuracy is essential. Introducing critic models, which assess and critique model outputs, offers a promising solution to enhance AI reliability, especially in code evaluation.
Understanding Critic Models
Critic models, such as CriticGPT, are a new development designed to improve the evaluation of AI-generated outputs, including code. Unlike traditional methods that rely on human feedback, critic models use a sophisticated training process to identify errors that humans might miss. However, they also face challenges, such as mistakenly identifying errors that don’t exist.

How Critic Models Are Trained and Evaluated
The training and evaluation of critic models involve several key steps and criteria:
Comprehensiveness: They must cover all significant issues in the code.
Critique-Bug Inclusion (CBI): They should pinpoint specific, known bugs.
Minimizing false positives: Avoiding the identification of non-existent issues.
Helpfulness and style: The critiques should be constructive and clear.
These models are assessed through blind tests and compared using Elo scores, offering a detailed analysis of their performance.
Training Process
Training critic models involves generating critiques for code, which are then rated by human evaluators. These ratings help train a reward model that further refines the critic models' accuracy.
Breakthrough Results with Critic Models
Critic models have shown promising results. For instance, CriticGPT has surpassed human evaluators in identifying bugs, indicating a significant advancement in AI-assisted code evaluation. Combining these models with human evaluators leads to even better performance. Additionally, techniques like Force Sampling Beam Search have improved the balance between detecting real and imagined issues, enhancing evaluation reliability.

Expanding the Use of Critic Models
The application of critic models in code evaluation is just the beginning. These models are part of broader research into making AI more self-corrective and reliable across various coding tasks. Understanding their role helps us see their potential to revolutionize the field.
Future Directions and Challenges
Critic models are paving the way for AI that is not only more reliable but also capable of self-assessment. However, challenges such as potential biases and distinguishing between different types of errors need to be addressed.
Conclusion
Critic models offer a significant improvement in ensuring the reliability of AI-generated code. By critiquing and evaluating code more accurately, they enhance human evaluators' ability to spot and fix errors. As we refine these models, we edge closer to AI systems that are not just effective but also inherently safe. For AI engineers in enterprise settings, this represents an exciting opportunity to lead in the application of critic models, contributing to the development of AI that is both powerful and dependable. This journey marks a step towards a future where AI and humans collaborate more seamlessly, unlocking new possibilities.

Is OpenAI following Anthropic? LLM Critics Help Catch LLM Bugs is the latest paper from OpenAI, describing how LLM critiques and AI feedback help improve RLHF and data quality and scale beyond human experts. 👀
CriticGPT is an autoregressive language model trained with RLHF (as in InstructGPT and ChatGPT) to accept a question-answer pair as input and output a structured critique that highlights potential problems in the answer. 💡 - Pretty similar to Anthropic's Constitutional AI method.
RLHF pipeline to train CriticGPT, similar to ChatGPT:
1️⃣ Step 1: Generate several critiques for each (question, answer) pair in the dataset, by AI & contractors.
2️⃣ Step 2: Contractors rated the attributes of the sampled critiques, including overall quality.
3️⃣ Step 3: Train a reward model to predict the human overall quality rankings (a minimal loss sketch follows these steps).
4️⃣ Step 4: Train CriticGPT using PPO and the reward model.
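Step 3 amounts to fitting a scalar reward on human preference rankings over critiques. A minimal, illustrative sketch of the pairwise (Bradley-Terry style) loss commonly used for such reward models is below; the toy tensors and the absence of an actual language-model backbone are assumptions for brevity, not details from the paper:

```python
# Illustrative sketch (not OpenAI's code): a pairwise reward-model loss that pushes
# the scalar score of a preferred critique above that of a rejected one.
import torch
import torch.nn.functional as F

def pairwise_reward_loss(score_preferred: torch.Tensor,
                         score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style objective: -log sigmoid(s_preferred - s_rejected)."""
    return -F.logsigmoid(score_preferred - score_rejected).mean()

if __name__ == "__main__":
    # Toy scalar scores for two (preferred, rejected) critique pairs; in practice
    # these come from a reward-model head on top of the critic LLM.
    preferred = torch.tensor([0.8, 1.2])
    rejected = torch.tensor([0.1, 0.5])
    print(pairwise_reward_loss(preferred, rejected).item())  # small loss when ordering is correct
```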
Insights
🐛 Used “tampering”: humans deliberately inserted bugs into code and wrote critiques of them
🔍 CriticGPT identified hundreds of errors in ChatGPT data
📊 Used Preference scores (B>A>D>C) on a 1-7 ordinal scale for RLHF
⏱️ Humans needed 50 minutes per example to write critiques.
🤖 The reward model was trained on a mix of ChatGPT and CriticGPT
🚀 Introduce Force Sampling Beam Search (FSBS) which uses Reward Model to improve outputs
🖥️ CriticGPT was fine-tuned with less compute than ChatGPT.
📝 Used Prompts from Reward Modelling dataset for PPO


###
https://blogs.nvidia.com/blog/ai-cloud-providers-reference-architecture/?ncid=so-link-519834
NVIDIA Unveils Reference Architecture for AI Cloud Providers
June 26, 2024 by Marc Hamilton

NVIDIA has announced a new reference architecture for cloud providers that want to offer generative AI services to their customers.

The NVIDIA Cloud Partner reference architecture is a blueprint for building high-performance, scalable and secure data centers that can handle generative AI and large language models (LLMs).

The reference architecture enables NVIDIA Cloud Partners within the NVIDIA Partner Network to reduce the time and cost of deploying AI solutions, while ensuring compatibility and interoperability among various hardware and software components.

The architecture will also help cloud providers meet the growing demand for AI services from organizations — of all sizes and industries — that want to leverage the power of generative AI and LLMs without investing in their own infrastructure.

Generative AI and LLMs are transforming the way organizations solve complex problems and create new value. These technologies use deep neural networks to generate realistic and novel outputs, such as text, images, audio and video, based on a given input or context. Generative AI and LLMs can be used for a variety of applications, such as copilots, chatbots and other content creation.

However, generative AI and LLMs also pose significant challenges for cloud providers, which need to provide the infrastructure and software to support these workloads. The technologies require massive amounts of computing power, storage and network bandwidth, as well as specialized hardware and software to optimize performance and efficiency.

For example, LLM training involves many GPU servers working together, communicating constantly among themselves and with storage systems. This translates to east-west and north-south traffic in data centers, which requires high-performance networks for fast and efficient communication.

Similarly, generative AI inference with larger models needs multiple GPUs to work together to process a single query.

Moreover, cloud providers need to ensure that their infrastructure is secure, reliable and scalable, as they serve multiple customers with different needs and expectations. Cloud providers also need to comply with industry standards and best practices, as well as provide support and maintenance for their services.

The NVIDIA Cloud Partner reference architecture addresses these challenges by providing a comprehensive, full-stack hardware and software solution for cloud providers to offer AI services and workflows for different use cases. Based on the years of experience NVIDIA has in designing and building large-scale deployments both internally and for customers, the reference architecture includes:

GPU servers from NVIDIA and its manufacturing partners, featuring NVIDIA’s latest GPU architectures, such as Hopper and Blackwell, which deliver unparalleled compute power and performance for AI workloads.
Storage offerings from certified partners, which provide high-performance storage optimized for AI and LLM workloads. The offerings also include those tested and validated for NVIDIA DGX SuperPOD and NVIDIA DGX Cloud. They are proven to be reliable, efficient and scalable.
NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking, which provide a high-performance east-west network for fast and efficient communication between GPU servers.
NVIDIA BlueField-3 DPUs, which deliver high-performance north-south network connectivity and enable data storage acceleration, elastic GPU computing and zero-trust security.
In/out-of-band management solutions from NVIDIA and management partners, which provide tools and services for provisioning, monitoring and managing AI data center infrastructure.
NVIDIA AI Enterprise software, including:
NVIDIA Base Command Manager Essentials, which helps cloud providers provision and manage their servers.
NVIDIA NeMo framework, which helps cloud providers train and fine-tune generative AI models.
NVIDIA NIM, a set of easy-to-use microservices designed to accelerate deployment of generative AI across enterprises.
NVIDIA Riva, for speech services.
NVIDIA RAPIDS accelerator for Spark, to accelerate Spark workloads.
The NVIDIA Cloud Partner reference architecture offers the following key benefits to cloud providers:

Build, Train and Go: NVIDIA infrastructure specialists use the architecture to physically install and provision the cluster for faster rollouts for cloud providers.
Speed: By incorporating the expertise and best practices of NVIDIA and partner vendors, the architecture can help cloud providers accelerate the deployment of AI solutions and gain a competitive edge in the market.
High Performance: The architecture is tuned and benchmarked with industry-standard benchmarks, ensuring optimal performance for AI workloads.
Scalability: The architecture is designed for cloud-native environments, facilitating the development of scalable AI systems that offer flexibility and can seamlessly expand to meet increasing demand of end users.
Interoperability: The architecture ensures compatibility among various components of the architecture, making integration and communication between components seamless.
Maintenance and Support: NVIDIA Cloud Partners have access to NVIDIA subject-matter experts, who can help address any unexpected challenges that may arise during and after deployment.
The NVIDIA Cloud Partner reference architecture provides a proven blueprint for cloud providers to stand up and manage high-performance scalable infrastructure for AI data centers.




###
https://arxiv.org/abs/2406.20094
[Submitted on 28 Jun 2024]
Scaling Synthetic Data Creation with 1,000,000,000 Personas
Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, Dong Yu
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub -- a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world's total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub's use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development.
This is one of the coolest ideas for scaling synthetic data that I've come across.
Proposes 1 billion diverse personas to facilitate the creation of diverse synthetic data for different scenarios.
It's easy to generate synthetic data but hard to scale up its diversity, which is essential for its application.
This paper proposes a novel persona-driven data synthesis methodology to generate diverse and distinct data covering a wide range of perspectives.
Previous works synthesize data using either instance-driven approaches (e.g., using seed corpus) or key-point-driven methods (e.g., using topic/subject). Both of these approaches lack the desired coverage, quality, and perspectives needed to robustly scale the data synthesis process.
To measure the quality of the synthetic datasets, they performed an out-of-distribution evaluation on MATH. A 7B model fine-tuned on their 1.07M synthesized math problems achieves 64.9% on MATH, matching the performance of gpt-4-turbo-preview.
Their method is not only effective for MATH problems, but it can also be used to generate logical reasoning problems, instructions, game NPCs, tool development, knowledge-rich text, and many more use cases.

###
https://www.youtube.com/watch?v=n5gJgkO2Dg0&ab_channel=Figma
At Config 2024, Figma announced a range of updates centered on innovative AI features. Figma AI includes a 'Make Design' feature that generates designs from text descriptions, a 'Search for Similar' feature that quickly finds similar elements, and AI auto-generation features that help users stay in their workflow. It also transforms the design process with AI-powered capabilities such as image background removal, multilingual translation, automatic layer renaming, and automatic prototype generation. Figma AI maximizes work efficiency and makes collaboration between design and development smoother. The update uses AI to significantly improve the user experience and help users carry out design work more efficiently.


###
https://wow.groq.com/groq-runs-whisper-large-v3-at-a-164x-speed-factor-according-to-new-artificial-analysis-benchmark/
Groq Runs Whisper Large V3 at a 164x Speed Factor According to New Artificial Analysis Benchmark
Written by:
Groq
Whisper Large V3 Is Now Available to the Developer Community via GroqCloud™
We’re excited to announce Groq is officially running Whisper Large V3 on the LPU™ Inference Engine, available to our developer community via GroqCloud™ through our Developer Playground. Whisper is a pre-trained model for automatic speech recognition and speech translation, trained on 680k hours of labeled data. Whisper and models like it are paving the way for accurate and seamless GenAI voice experiences while broadening the possibilities for developer applications and use cases, both of which require low-latency AI inference.

This also marks an addition to the expanding GenAI model portfolio hosted by Groq. Large Language Models (LLMs) continue to run on the Groq LPU, and the addition of Whisper Large V3 is another step on our way to multi-modal support.

Artificial Analysis has included our Whisper performance in their latest independent speech-to-text benchmark.

Dive into the results below. To see this model in action, check out Project Media QA on GroqLabs. If you are a developer interested in Whisper running on Groq, sign up for access via GroqCloud at console.groq.com.


Artificial Analysis has independently benchmarked Whisper Large V3 on Groq as achieving a Speed Factor of 164. This means Groq can transcribe our 10-minute audio test file in just 3.7 seconds. Low latency transcription is a critical component for seamless voice experiences. AI voice experiences require low latency inference on transcription, language, and voice models to enable immediate responses that keep users engaged.

- Micah Hill-Smith, Co-founder & CEO, ArtificialAnalysis.ai
Speed Factor

Measured as input audio seconds transcribed per second, Groq clocks in at a speed factor of 164x real-time, the fastest implementation of the base Whisper Large V3 model.


Quality

Artificial Analysis defines Word Error Rate (WER) as the percentage of words transcribed incorrectly. Groq minimized its WER to 10.3% for Whisper Large V3, matching the lowest WER from other providers on the leaderboard.
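Since WER is just word-level edit distance normalized by the number of reference words, it is straightforward to compute directly; the snippet below is a minimal reference implementation with made-up example sentences, not Artificial Analysis's benchmarking code:

```python
# Word Error Rate: word-level edit distance (substitutions + insertions + deletions)
# divided by the number of words in the reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words = 0.33
```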


Price

Artificial Analysis defines price as USD per 1000 minutes of audio, bringing the Groq price to $0.5 based on offering Whisper Large V3 at a price of $0.03 per hour transcribed.

###
https://n.news.naver.com/article/032/0003305572?cds=news_my
Chairman Chey Tae-won calls for a "preemptive response in AI" as SK commits 80 trillion won by 2026
Published June 30, 2024, 8:40 p.m.
By Lee Jin-ju
"Affiliate restructuring" to proceed in stages



SK Group will secure 80 trillion won in funds by 2026 and invest it in future growth areas, including artificial intelligence (AI) and semiconductors.

The strategy is to respond preemptively to a rapidly changing market and pursue qualitative growth through "selection and concentration."

SK announced on June 30 that it had agreed on this strategic direction at a management strategy meeting held on June 28-29 at the SKMS Research Institute in Icheon, Gyeonggi Province, attended by Chairman Chey Tae-won (pictured), Vice Chairman Chey Jae-won, SUPEX Council Chairman Chey Chang-won, and some 20 CEOs of major affiliates.

Chey Yoon-jung, the chairman's eldest daughter and head of business development (vice president) at SK Biopharmaceuticals, reportedly attended the meeting for the first time. Chairman Chey, who was on a business trip to the United States, joined by video and stressed that "in this new era of transition, preemptive and fundamental change is needed to prepare for the future."

"Right now in the U.S., the winds of AI-driven change are blowing so hard that people say there is nothing to talk about other than AI," Chey said. "We must use the group's capabilities to strengthen our AI value-chain leadership, from AI services to infrastructure."

Chey also projected that energy solutions, an area where SK is strong, could secure growth opportunities in the global market on par with AI.

At the meeting, SK's management agreed to secure 80 trillion won by 2026 through improved profitability, business-structure optimization, and greater synergy, and to use it for investment in future growth areas such as AI and semiconductors as well as for shareholder returns.

They also set a goal of generating 30 trillion won in free cash flow (FCF) within three years through operational improvements and keeping the debt ratio below 100%. SK expects its pre-tax profit, which recorded a 10-trillion-won loss last year, to return to the black this year and reach around 22 trillion won.

SK hynix will invest a total of 103 trillion won over the next five years, through 2028, to strengthen the competitiveness of its semiconductor business. About 80% of that, or 82 trillion won, will go to AI-related businesses such as HBM.

A "Semiconductor Committee" will also be established under the SUPEX Council effective July 1, chaired by SK hynix President Kwak Noh-jung.

The CEOs agreed on the need to bring the total number of affiliates down to a "manageable range" and decided to pursue this in stages following each company's internal procedures. SK currently has 219 affiliates, more than other major groups such as Samsung (63), prompting observations that the number is excessive.