
Content provided by Informa TechTarget. All podcast content, including episodes, graphics and podcast descriptions, is uploaded and provided directly by Informa TechTarget or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal.

A year in review with the Targeting AI podcast

44:08
 
 

Manage episode 431291737 series 3493557

For the past year, the Targeting AI podcast has explored a broad range of AI topics, none more than the fast-evolving and sometimes startling world of generative AI technology.

From the first guest, Michael Bennett, AI policy adviser at Northeastern University, the podcast has focused intently on the popularization of generative AI, while also touching on traditional AI.

While that first episode centered on the prospects of AI regulation, Bennett also spoke about some of the controversies then emerging in the nascent stages of generative AI.

"Organizations who have licenses to use and to sell photographers' works are pushing back," Bennett said during the inaugural episode of the Targeting AI podcast.

While Bennett's point of view illuminated the regulatory and ethical dimensions of the explosively growing technology, Michael Stewart, a partner at Microsoft's venture firm M12, discussed the startup landscape.

With the rise of foundation model providers such as Anthropic, Cohere and OpenAI, generative AI startups over the last 12 months chose to partner with and be subsidized by cloud giants -- namely Microsoft, Google and AWS -- instead of seeking to be acquired.

"This is a very ripe environment for startups that have a partnership mindset to work with the main tech companies," Stewart said during the popular episode, which was downloaded more than 1,000 times.

The early stages of generative AI were marked by accusations of data misuse, particularly from artists, writers and authors.

Our Targeting AI podcast hosts have also spoken to guests about data ownership and how large language models are affecting industries such as the music business.

The podcast also explored new regulatory frameworks like President Joe Biden's executive order on AI.

With some 27 guests from a diverse group of vendors and other organizations, the podcast took shape and laid the groundwork for a second year with plenty of new developments to explore.

Coming up soon are episodes on Democratic presidential candidate Kamala Harris’ stances on AI and big tech antitrust actions, election deepfakes and tech giant Oracle's foray into generative AI.

Listen to Targeting AI on Apple Podcasts, Spotify and all major podcast platforms, plus on TechTarget Editorial’s enterprise AI site.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Together, they host the Targeting AI podcast series.


50 episodes

All episodes

 
Generative AI has led to many fears about the workforce. However, for work management platform vendor Asana, GenAI and agentic AI can be effective workplace tools. Instead of replacing humans, AI technology can work alongside them. Despite the potential for collaboration, not all tasks require AI.

Featuring: Saket Srivastava, CIO of work management platform vendor Asana.

In today's episode, we cover:
- The collaboration between AI technology and humans
- Why employees need training and support in AI
- How GenAI can significantly improve project management tasks
and more.

To learn more about AI and Asana, check out SearchEnterpriseAI. To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

References:
- Project management vendor Asana brings AI to Work Graph
- 6 of the top change management applications
- Connected workspace apps improve collaboration management…
 
As an AI writing assistant, Grammarly has used AI technology from its inception. The popularity of large language models led the writing assistant vendor to move beyond natural language processing and incorporate large language models that help enterprise employees improve their writing as they work. This has led Grammarly to see a role it can play in transforming the future of work.

Featuring: Luke Behnke, head of enterprise product at Grammarly, an AI-powered writing assistant platform.

In today's episode, we cover:
- Grammarly's AI evolution
- Agentic AI and the future of work
- AI technology as an assistant, not a replacement for workers
and more.

To learn more about AI and Grammarly, check out SearchEnterpriseAI. To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

References:
- Grammarly AI and an update to the writing tool
- What will be the future of the workplace?
- Top 4 AI writing tools for improved business efficiency…
 
A key truth about AI is that regulation has long lagged innovation. However, this has not removed the responsibility of enterprises to deploy AI systems responsibly, or of AI vendors to create responsible systems. What are the key metrics for judging whether an AI system is safe?

Featuring: Stuart Battersby, CTO, and Danny Coleman, CEO, at Chatterbox Labs, vendor of a quantitative AI risk metrics platform.

In today's episode, we cover:
- The difference between AI safety and responsible AI
- The need for standards in AI safety
- The future of AI safety in enterprises
and more.

To learn more about responsible AI, check out SearchEnterpriseAI. To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

References:
- Assessing if DeepSeek is safe to use in the enterprise
- EU, U.S. at odds on AI safety regulations
- Responsible AI vs. ethical AI: What's the difference?…
 
Industrial AI is less familiar than consumer AI, but it represents a critical and growing sector of AI's influence. What unique AI applications are surfacing in this area?

Featuring: Olympia Brikis, director of industrial AI research at Siemens.

In today's episode, we cover:
- Understanding industrial AI and its distinctions from consumer AI
- AI, and specifically generative AI, adoption at Siemens
- The role of digital twins in testing AI recommendations
and more.

To learn more about industrial AI, check out SearchEnterpriseAI. To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

References:
- CES 2024: Siemens eyes up immersive tech, AI to enable industrial metaverse
- How businesses are using AI in the construction industry
- Siemens forges digital twin deal with Nvidia for metaverse…
 
Traditional, generative, agentic: in the past couple of decades, AI has metamorphosed into an indispensable tool for enterprises wanting to streamline their processes and improve their impact. In this episode, we dive into the different types of AI, best practices for implementation, and the challenges faced in the industry.

Featuring: Deepak Singh, vice president at AWS.

In today's episode, we cover:
- The difference between traditional AI, generative AI and agentic AI
- The role of agentic AI in software development
- Best practices for implementing agentic AI
and more!

To learn more about agentic AI, check out SearchEnterpriseAI. To watch the video version of our podcast, subscribe to our YouTube channel, @EyeonTech.

References:
- AWS intros new foundation model line and tools for Bedrock
- Amazon Q, Bedrock updates make case for cloud in agentic AI
- Amazon to spend $100B on AWS AI infrastructure…
 
In the couple of years since the popularization of ChatGPT, generative AI technology has quickly taken hold in the legal profession.

It has backfired in some cases, such as when an attorney filed a legal brief written with ChatGPT's help and the AI platform hallucinated some of the cases cited in the brief. That case and others have led some law firms to block general access to AI tools. Most recently, Hill Dickinson, a law firm in the U.K., asked its staff not to use generative AI tools like ChatGPT.

Still, many law firms are using generative AI tools, and some even market their own AI systems. AI vendors are also partnering with law firms and companies in the legal profession. In February, LexisNexis and OpenAI agreed to integrate OpenAI's large language models across LexisNexis products.

The success of, and uncertainty surrounding, AI tools in the legal profession led James M. Cooper and Kashyap Kompella to write the book A Short and Happy Guide for Artificial Intelligence for Lawyers. Cooper is a law professor at California Western School of Law, while Kompella is CEO of AI analyst firm RPA2AI Research. In the book, Cooper and Kompella explore how lawyers can understand and use AI technology.

"We saw an urgent need to upskill lawyers on AI," Kompella said on the latest episode of Informa TechTarget's Targeting AI podcast. "How do you move AI ethics and responsible AI into practice? You have to move them through lawyers. Lawyers are a big part of that equation."

Kompella and Cooper argue that while numerous books for lawyers about AI exist, few focus on using the technology ethically. The authors also argue that while the legal profession has traditionally been slow to adopt new technologies, it can benefit from AI for several reasons. For example, AI technology can provide access to legal services for those in underserved areas like rural communities in the United States, Cooper said. "AI can be a game changer in terms of provision of legal services," he said.

However, providing more education is the key to helping legal professionals understand AI technology. "The law school curriculum is not teaching AI or any technologies to the students, so there is a huge skill gap," Kompella said.

Cooper added, "The skill sets of prompt engineering, of knowing how to use these AI tools and the dangers that come with them, should be rote in law schools now right from the first year. Those law schools around the world that embrace this idea are future-proofing their students. They're not going to have to play catch up."

Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series…
 
Without a good data strategy, generative AI becomes unusable technology for enterprises. This was true when ChatGPT started becoming popular, and it is even more accurate years later.

The most recent example is the Chinese AI startup DeepSeek. While most AI cloud providers like Google, AWS and Microsoft now offer the DeepSeek-R1 reasoning model, many AI experts believe that enterprises might be hesitant to use it due to the data it was trained on.

Despite DeepSeek-R1's innovation, it all comes down to the foundation, said Michelle Bonat, chief AI officer at AI Squared, an AI and data integration platform. "As GenAI expands and expands ... the fundamentals are the fundamentals," Bonat said on the latest episode of Informa TechTarget's Targeting AI podcast.

She added that while many organizations may have started with GenAI by just putting up a chatbot, many have found that without good-quality data, they might have to pause their GenAI initiatives. The reason is that the nature of generative AI systems is to produce responses; without good-quality data, they tend to hallucinate.

Thus, Bonat said, the growth in GenAI initiatives across organizations has also led to an increase in conversation around data strategy, data quality and data cleanliness. "They're very much connected," she said. "GenAI has become important in the conversation that connects with data strategy, data quality, data cleanliness and also, ultimately, in responsible AI and governance within the organization."

She added that enterprises should pay attention to data and responsible AI because it benefits their businesses. "It's a competitive advantage to have responsible AI," she continued. "Customers want AI systems they can trust. ... Being transparent and having responsible AI helps increase your brand reputation."

Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.…
 
While some vendors are working to ensure large language models become better at reasoning, other AI vendors are making them capable in multiple languages.

Writer is a provider of a full-stack generative AI platform for enterprises. While the vendor provides a generative AI platform that enterprises can use to build generative AI capabilities into their workflows, it also offers a family of LLMs: Palmyra. The models support text generation and translation in numerous languages, including Spanish, French, Hindi and Russian.

"Multilingual training data and models that can be as good in dozens of other languages as they are in English is something everybody should strive for," said Writer co-founder and CEO May Habib on a recent episode of Informa TechTarget's Targeting AI podcast.

Writer also uses large volumes of synthetic data to help build legal confidence in generative AI technology, Habib said. Writer also publishes data on how its models score for bias and toxicity. "We really want to make sure that we are compliant with folks' ESG [environmental, social and governance] guardrails and guidelines," Habib said.

Writer recently raised $200 million in Series C funding, bringing its valuation to $1.9 billion.

Esther Shittu is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.…
 
The contact center world is a difficult place, packed with frustration and stress. Digital communications giant Cisco sees its mission as easing that experience for human contact center workers and the customers they deal with every day.

For that undertaking, the vendor has seized on generative AI and agentic AI as the vehicles to both automate and augment the work of humans, in essence smartening up the traditional chatbots that have long helped companies interact with their customers.

"We're going to see a lot more of what I call event-based communication, proactive communication outbound that we do particularly well, powered by AI," said Jay Patel, senior vice president and general manager for customer experience at Cisco Webex, on the Targeting AI podcast from Informa TechTarget. "And then the response path to that is we think there will be AI agents involved in some of the more simple use cases.

"For example, if you haven't paid a bill, they can obviously call you in the outbound call center, but probably a better way of doing it is probably to send you a message with a link to then basically make the payment," Patel continued.

Like many other big tech vendors, Cisco deploys large language models (LLMs) from a variety of specialist vendors, including OpenAI and Microsoft. It also uses open models from independent generative AI vendor Mistral, as well as its own AI technology developed in-house or obtained through acquisitions. "Fundamentally, what we are looking at is the idea of an AI engine for each use case, and within the AI engine you would have a particular LLM," Patel said.

Among the generative AI-powered tools Cisco has assembled are Webex AI Assistant and Agent Wellness, which tend to the psyches of busy human contact center workers. "Customers call very frustrated; they may shout at somebody. And then if you've had a difficult call, the agent wellness feature will mean that the supervisor knows that this set of agents has had a set of difficult calls," Patel said. "Maybe they're the ones who need a break now. So, there are ways of improving employee experience inside the contact center that we think we can … use AI for."

Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.…
 
Box has been in the AI game for a long time. But when generative AI mushroomed into a transformative force in the tech world, the cloud content management vendor opted to turn to specialists in the new and fast-growing technology to power the arsenal of tools in its platform.

"We've been doing AI for many years. But the really cool thing that happened … AI got to the point where the generative AI models understood content," said Ben Kus, CTO at Box, on the Targeting AI podcast from Informa TechTarget. "For us, this whole generative AI revolution has been this great gift to everybody who deals with content. It's almost like having a very dedicated, very intelligent person who stands next to you, ready to do what you want."

When generative AI exploded with OpenAI's release of ChatGPT in November 2022, Box turned to OpenAI for its first batch of generative AI tools. Box CEO Aaron Levie had known OpenAI CEO and co-founder Sam Altman for many years. However, when a passel of other independent generative AI vendors sprang up and the tech giants started releasing their own powerful large language models (LLMs) and multimodal models, Box decided to broaden its generative AI palette.

"Azure and OpenAI are partners of ours and we think they have great models, but we are not at all dedicated to any one model," Kus said. "In fact, at Box, one of our goals is to provide you with all of the major models that you might want." These include generative AI models from Google, IBM, Anthropic and Amazon.

One example of how Box uses an outside model is Anthropic's Claude 3.5 Sonnet LLM, which Kus called "one of the best models out there right now." One application is at a financial firm that deals with long bond offerings. The company needs to analyze many of these complex financial vehicles to evaluate which bonds it wants to invest in. "They use [the model] to extract key info. It takes the [job] of looking through these bonds. From hours or days to … hopefully, minutes," Kus said. "If the model is very good, it can give you very good answers. If it's not as smart, then it can be off a little bit. So, this particular company really wants to have the best models so they can get the best sort of use of this kind of AI."

Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.…
 
This is the year of AI agents.

The last few months of 2024 brought much talk about, and high expectations for, AI agents that can operate autonomously and semi-autonomously. Many vendors have capitalized on the enthusiasm to introduce new agentic products: Salesforce came out with Agentforce, and Microsoft introduced Copilot agents.

With 2025 here, questions remain about whether the momentum behind agents will continue. Some see the agentic hype, and real progress, persisting this year.

Craig Le Clair, a Forrester Research analyst and author of the soon-to-be-published book Random Acts of Automation, is among those who think AI agents will continue to gain momentum in the new year. "It's the biggest change toward AGI [artificial general intelligence] that I've seen," Le Clair said on the latest episode of Informa TechTarget's Targeting AI podcast, referring to the concept of AI that is as smart as or smarter than human intelligence.

Enterprises will likely adjust the ways they use applications that employ AI agents as copilots to augment humans, because many of those applications are not profitable, he said. However, AI agents will be the driving force in helping enterprises build platforms that use generative AI technology to spur business value, he said.

"When you really start to turn piles of data into conversations with people ... that's the opportunity for this," Le Clair said. "For an employee to have a conversation with standard operating procedures to get advice on what to do, or for standard operating procedures to be taken out of that PDF repository and actually put into a prompt and generate tasks that are then followed by an agent to get something done -- the potential is really there."

As with all new technology, AI agents involve a trust issue. Enterprises still do not trust the technology to be fully autonomous and perform tasks from start to finish all on its own, Le Clair said. However, organizations can rely on AI agents to perform part of the work with a human in the loop. With the speed of the technology's maturation, progress toward fully autonomous agents by 2028 is likely, Le Clair predicted.

Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.…
 
AI application startup Moveworks, which was founded in 2016 and was valued at more than $2.1 billion in 2021, uses a reasoning engine to help employees search for information across the enterprise. Since its inception, a key ingredient in the company's success has been AI and generative AI technology.

"We were the first company after Google to deploy BERT in production," said co-founder and president Varun Singh on the latest episode of Informa TechTarget's Targeting AI podcast.

BERT was Google's first model with bidirectional encoding, which enabled computers to understand large text spans. It was pretrained, so Moveworks did not have to train it from the ground up. It also did not require a lot of data.

After using BERT to train its automation platform, Moveworks started using GPT-2 from OpenAI in 2020, two years before the mass popularization of the generative AI vendor's ChatGPT chatbot, mostly to generate synthetic data. Singh added that he and his team had failed to realize right away that the model could also be used for reasoning tasks.

"It's not so much a mistake that was made or not, but it was just sort of as technology evolved, the moment a paradigm shift actually comes into full focus, you look back and you're like, 'We could have done that sooner because we had access to the models, but we didn't see how powerful they could be,'" he said.

Since the shift, Moveworks has evolved from a platform with a reasoning engine to a platform for building AI agents. On Oct. 1, Moveworks launched Agentic Automation as part of its Creator Studio offering. The system enables developers to build AI agents.

Throughout the evolution of its business, Moveworks has differentiated itself with its use of AI technology, Singh said. "Without AI, there's nothing Moveworks has to offer to the world," he said. "There's only value from Moveworks because of AI."

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.…
 
When generative AI became the next big thing in tech, enterprise software giant Oracle bet heavily on a startup to provide it with foundation and large language models rather than scramble to develop its own. That then-fledgling company was Cohere.

Founded in 2019, the generative AI vendor raised $270 million in a Series C round whose investors included Oracle, Nvidia, Salesforce Ventures and some private equity firms. In July, Cohere raised another $500 million and reached a market valuation of $5.5 billion.

Cohere's open generative AI technology is now infused in many of Oracle's databases, a fixture among large enterprises. The tech giant has also tapped Cohere's powerful and scalable Command R model for Oracle's popular vertical market applications, including those for finance, supply chain and human capital management.

But while Oracle has put Cohere at the center of its generative AI and agentic AI strategy, the tech giant is also working closely with Meta. The social media colossus has gained a foothold in the enterprise AI market with its Llama family of open foundation models. Oracle is customizing Llama for its Oracle Cloud Infrastructure platform, along with Cohere's models.

"We have made a decision to really partner deeply around the foundation models," said Greg Pavlik, executive vice president of AI and data management services at Oracle Cloud Infrastructure, on the Targeting AI podcast from TechTarget Editorial. "What we're looking for are companies that are experienced with creating high-quality generative AI models," he continued. "But more importantly … companies that are interested in enterprise and specifically business solutions."

Pavlik said Oracle values the open architecture of the models from both Cohere and Meta, which makes it easier for Oracle to customize and fine-tune them for enterprise applications. "The advantage really of having a deep partnership is that we're able to sit down with the foundation model providers and look at the evolution of the models themselves, because they're not really static," he said. "A company will create a model and then they'll continually retrain it.

"We see our role as to come in and proxy for the enterprise user, proxy for a number of verticals," Pavlik continued. "And then try to move the state of the art in the technology base closer and closer to the kinds of patterns and the kinds of scenarios that are important for enterprise users."

Oracle also uses generative AI technology from other vendors and enables its customers to use other third-party models, he noted.

Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Esther Ajao is a TechTarget Editorial news writer and podcast host covering AI software and systems. Together, they host the Targeting AI podcast.…
 
At the beginning of the wave of generative AI hype, many feared that generative models would replace the jobs of creatives like artists and photographers. With generative AI models such as Dall-E and Midjourney seemingly creating unique works of art and images, some artists found themselves at a disadvantage. Some say the generative systems took their artwork, copied it and used it to produce their own images. In some cases, the generative systems allegedly outright stole the creative work . Two years later, artists have to some extent been reassured by the support of stock vendors like Getty Images. Instead of trailing behind generative AI tools such as Stable Diffusion, Getty created its own image-generating tool: Generative AI by Getty Images . Compared with other image generators, Getty has taken great lengths to restrict its model through the data set. The stock photography company maintains what it calls a clean data set. "A clean data set is really a training data set that a model is trained on that can lead to a commercially safe or responsible model," said Andrea Gagliano, senior director of AI and machine learning at Getty Images, on the latest episode of TechTarget Editorial's Targeting AI podcast. Getty's clean data set does not contain brands or intellectual property products, Gagliano said. The model's data set also does not include images of well-known people or likenesses of celebrities like Taylor Swift or presidential candidates. "We have taken the very cautious approach where our generator will not generate any known person or any celebrity," Gagliano said. "It will not generate Donald Trump ," she said, referring to the President-elect. "And it will not generate Kamala Harris," referring to the vice president and former presidential candidate. "It has never seen a picture of Donald Trump," she continued. "The model has never seen a picture of Kamala Harris." 
Gagliano added that removing this possibility also guards against those who want to misuse the technology to create deepfakes. In addition, any generated output is labeled synthetic or AI-generated.

"We don't want any situation where we start to undermine the value of a real image," Gagliano said.

Finally, the images in Getty's data set carry licenses, ensuring that creators get compensated. A portion of every dollar made by Generative AI by Getty Images goes to the creators who contributed to the data set.

"The reason for that is the more unique imagery that we bring into the training data set, the more additive it is," Gagliano said.

Getty updated its generative AI tools Tuesday. The new capabilities include Product Placement, which lets users upload their own product images and generate backgrounds, and Reference Image, which enables users to upload sample images to guide the color and composition of the AI-generated output.
 
President-elect Donald Trump during his election campaign offered clues about how his administration would handle the fast-growing AI sector. One thing is clear: AI, to the extent that it is regulated, is headed for deregulation.

"It's likely going to mean less regulation for the AI industry," said Makenzie Holland, senior news writer at TechTarget Editorial covering tech regulation and compliance, on the Targeting AI podcast. "Being against regulation and [for] deregulation is a huge theme across his platform."

Trump views rules and regulations on business as costly and burdensome, Holland noted. The former president and longtime businessman's outlook presumably extends to independent AI vendors and the tech giants that also develop and sell the powerful generative AI models that have swept the tech world.

President Joe Biden's wide-ranging executive order on AI has been the strongest articulation of how the federal government views AI policy. However, it's unclear which elements of the Democratic president's plan Trump will scrap and which he'll keep. Trump established the National Artificial Intelligence Initiative Office at the end of his first term as president in 2021.

David Nicholson, chief technology advisor at Futurum Group, said on the podcast that Trump will likely retain some aspects of the executive order with bipartisan support. Among these is the federal government's recognition that it should guide and promote AI technology.

"[Trump will] definitely not scrap it wholesale," Nicholson said. "There's something behind a lot of those concerns ... and pretty bipartisan concern that AI is a genie that we only want to let out of the bottle, if possible, very carefully."

Holland, however, doesn't expect many regulatory proposals in Biden's executive order to survive the next Trump presidency. Trump is also likely to dramatically de-emphasize the AI safety concerns and regulatory proposals that feature prominently in Biden's executive order, she said.
Meanwhile, concerning Elon Musk -- a major Trump backer and owner of the social media platform X, formerly Twitter, and generative AI vendor xAI -- the issue is complicated, Nicholson said.

Musk has been a trenchant critic of xAI competitor OpenAI, alleging in a lawsuit that the rival vendor abandoned its commitment to openness in AI technology. However, Nicholson noted that Musk's definition of transparency in training large language models is unorthodox, insisting that models be "honest" and not contain political bias.

"Having the ear of the president and the administration, I think he could be meaningful in that regard," Nicholson said. "[Musk] is going to be the loudest voice in the room when it comes to a lot of this stuff."

While Trump is expected to try to reverse or ignore much of Biden's agenda, one major piece of bipartisan legislation passed during Biden's tenure, the CHIPS and Science Act of 2022, is likely to survive because it emphasizes reviving manufacturing and technology development in the U.S., Nicholson said.

But the Federal Trade Commission's and Department of Justice's active stances on AI rulemaking and big tech regulation -- the DOJ successfully sued Google for monopolizing the search engine business -- are ripe for a Trump rollback.

"The FTC is likely to face a shake-up, as far as Lina Khan's job probably is on the line," Holland said, referring to the activist FTC chair, who has vigorously pursued a number of big tech vendors.

"Trump's entire platform is about deregulation and being against regulation. That's automatically going to impact these enforcement agencies, which, in some capacity, can make their own rules," Holland said.

In the absence of meaningful federal regulation of AI, the U.S. is moving toward a state-by-state regulatory patchwork.
 