AI - Artificial intelligence


AI related news and articles.

founded 11 months ago

Vectors are the fundamental way AI models understand and process information. Small vectors describe simple attributes, such as a point on a graph, while “high-dimensional” vectors capture complex information such as the features of an image, the meaning of a word, or the properties of a dataset. High-dimensional vectors are incredibly powerful, but they also consume vast amounts of memory, creating bottlenecks in the key-value cache: a high-speed "digital cheat sheet" that stores frequently used information under simple labels so a computer can retrieve it instantly, without searching through a slow, massive database.

Vector quantization is a powerful, classical data compression technique that reduces the size of high-dimensional vectors...
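The idea can be sketched in a few lines: learn a small "codebook" of representative centroids, then store each high-dimensional vector as just the index of its nearest centroid. This is a minimal illustrative sketch with made-up names and data (production systems use optimized libraries rather than hand-rolled k-means):

```python
import numpy as np

def build_codebook(vectors, k, iters=10, seed=0):
    """Learn k centroids (the codebook) from training vectors via k-means."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Move each centroid to the mean of the vectors assigned to it.
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each vector with the index of its nearest codebook entry."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

vecs = np.random.default_rng(1).normal(size=(1000, 64)).astype(np.float32)
codebook = build_codebook(vecs, k=16)
codes = quantize(vecs, codebook)
# Each 64-float vector (256 bytes) is now a single small index; with k <= 256
# it fits in one byte, at the cost of reconstructing only an approximation.
```

Decompression is just a codebook lookup (`codebook[codes]`), which is why quantization trades a little accuracy for a large reduction in memory.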


Abstract:

Resistance to artificial intelligence (AI) is widespread and persists even when known psychological barriers are removed. What explains this persistent aversion? Across four studies, we investigate whether moral reactions to AI—rooted in deeply held beliefs about right and wrong—help explain resistance beyond pragmatic concerns. In Study 1, we analyzed all news headlines in a major US media corpus (COCA, 2018–2024) and found that AI is moralized at levels comparable to GMOs and vaccines—technologies whose moral opposition has received considerable attention—and that surges in moralization followed the launch of major AI applications such as ChatGPT and DALL-E. In Studies 2a, 2b, and 3, representative samples of Americans reported their attitudes toward several AI applications and other technologies. Although few participants opposed AI outright, most opponents indicated their views would remain unchanged even if AI proved beneficial—suggesting moral rather than pragmatic roots. Structural equation models revealed that moralization of AI was best captured by a single latent factor, indicating a generalized moral sentiment rather than domain-specific risk–benefit appraisals. Qualitative analyses further uncovered the most common justifications people invoke and how opponents and supporters differ in their reasoning. In Study 4, participants from Studies 2b and 3 completed a subsequent behavioral grading task; moralization scores measured in the earlier surveys predicted greater reluctance to use AI even when doing so would benefit participants (a one standard deviation increase in moralization corresponded to a 42% decrease in AI usage). Together, these findings demonstrate that resistance to AI is partly moral in nature, suggesting that reaping the potential benefits of AI tools may require addressing moral concerns rather than relying solely on pragmatic arguments.


The general vibe I see online is that the AI companies have not been doing particularly well in the reliability department. Both OpenAI and Anthropic publish reliability statistics on their status pages. Now, I’m not a fan of using the nines as a meaningful indicator of reliability, but since I don’t have access to any other signals about reliability for these two companies, they’ll have to do for the purposes of this blog post.
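For readers less familiar with "the nines": each extra nine of availability shrinks the permitted downtime by a factor of ten. A quick back-of-the-envelope calculation, using a 30-day month and not tied to any particular provider's SLA:

```python
# Downtime budget implied by N nines of availability over a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(nines: int) -> float:
    """Minutes of downtime per month permitted at (1 - 10^-nines) availability."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_MONTH * (1 - availability)

for n in range(1, 5):
    print(f"{1 - 10 ** (-n):.4%} available -> "
          f"{allowed_downtime_minutes(n):.1f} min/month of downtime")
```

So "three nines" (99.9%) still allows roughly 43 minutes of downtime a month, which is part of why a headline nines figure says little about how outages actually cluster and how they feel to users.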
