Category: Apps

  • Threads to begin ad testing in 2025, Meta eyes revenue from rapidly growing user base

    Threads to begin ad testing in 2025, Meta eyes revenue from rapidly growing user base

    Meta is preparing to run ads on Threads in early 2025, marking a significant shift from its initial strategy of refraining from monetising the platform in its early stages.

    The app has grown explosively, reaching 200 million monthly users as of August 2024 and surpassing Meta’s expectations. The company is now reconsidering its stance and focusing on turning its rapidly expanding audience into a revenue stream.

    Read also: Meta’s Llama faces scrutiny as OSI defines ‘Open’ AI standards

    Rapid growth sparks monetisation shift

    Celebrating the milestone, CEO Mark Zuckerberg described Threads as potentially the firm’s next billion-user service. Ads are seen as a way for Meta to profit from this growth sooner than originally intended.

    Executives had said monetisation would be a multi-year endeavour, but the choice to fast-track the process shows Meta’s ambition to start making money from the platform sooner.

    During the company’s recent earnings call, CFO Susan Li tempered expectations, cautioning that Threads may not generate significant revenue in 2025.

    “We don’t expect Threads to be a meaningful driver of 2025 revenue at this time. We’ve been pleased with the growth trajectory, and again, we’re really focused on introducing features that the community finds valuable and working to deepen growth and engagement,” Li remarked.

    Ads in development, but not yet active

    Hints of advertising features surfaced in Threads’ code in August 2024, when a “Sponsored” badge appeared. However, Instagram spokesperson Alec Booker affirmed that Meta was not actively testing ads: “We’re not testing ads in Threads at this time, and there is no immediate timeline for monetisation.”

    Some marketers reported seeing ad spots labelled “Threads” in September, suggesting the parent company was laying the groundwork for ads, though Meta did not officially acknowledge any rollout or change.

    Read also: Meta updates facial recognition to stop celebrity-bait scams on Facebook, Instagram

    With active users surpassing 200 million in May 2024, an increase of 25 million from the month before, Threads has seen consistent user expansion. Zuckerberg acknowledged Threads’ potential as the next billion-user offering but noted that revenue generation will take time.

    As of October 2024, Threads had grown to 275 million monthly active users, maintaining a consistent rise in its user base. Since Meta’s main income comes from advertising, ads are expected to eventually play a key role in the platform’s monetisation plan.

    Recent reports indicate that Threads is likely to let a small group of marketers create and run ads beginning in January 2025. A team from Instagram’s advertising business will spearhead the project. Meta also intends to introduce new tools to improve Threads’ user interface and engagement.

  • Snapchat allows parents to track teens with new location-sharing feature

    Snapchat allows parents to track teens with new location-sharing feature

    Snapchat has announced a new feature that allows parents to request their teen’s real-time location. On Thursday, the video-sharing app introduced location sharing on Snap Map within its Family Center, an in-app hub offering parental tools and resources.

    Snapchat enhancing safety for families

    The new location-sharing capability is designed to empower parents, allowing them to monitor their children’s whereabouts when necessary. Parents can send a request to their teens, who can choose to share their location for a specified period. This initiative reflects Snapchat’s commitment to enhancing user safety while balancing the need for privacy among younger users.

    Read also: Snapchat plans to introduce watermarks to AI-Generated image

    This feature is particularly relevant as concerns about teen safety continue to rise, especially regarding issues like bullying, abduction, and other risks associated with unsupervised outings.

    According to Snap, these location features are accessible only by opting in and are limited to people already added as friends. The platform will also add reminders for users who share their location with all friends on their list, encouraging them to review their settings regularly.

    Contextual relevance in Africa

    In Africa, where mobile connectivity is rapidly expanding, features like real-time location sharing can play a crucial role in enhancing family safety. As urbanisation increases and more young people engage in social activities outside the home, tools that facilitate communication and safety become increasingly relevant. Countries like South Africa and Nigeria are witnessing a surge in smartphone usage among youth, making such features particularly impactful.

    Moreover, as digital literacy improves across the continent, parents are becoming more proactive in leveraging technology to ensure their children’s safety. Snapchat’s new feature aligns with this trend, providing families with innovative solutions tailored to contemporary challenges.

    Read also: Snapchat to reduce workforce by 10%

    Concerns about safety with location sharing

    While this feature aims to enhance family safety, it also raises concerns about potential misuse by stalkers or individuals who may exploit access to real-time locations. Parents must remain vigilant about how they discuss and implement these tools with their teens to ensure that safety measures do not inadvertently compromise their children’s security or privacy.

    Snapchat’s introduction of a real-time location-sharing feature represents a significant step towards enhancing family safety while navigating the complexities of modern parenting.

    By allowing parents to request their teen’s location, Snapchat aims to foster a safer environment for young users without compromising their autonomy.

    As this feature rolls out, it will be essential for families to engage in open dialogues about privacy and trust, ensuring that technology serves as a tool for connection rather than control.

  • TikTok cracks down on Kenyan content, removes 360,000 videos

    TikTok cracks down on Kenyan content, removes 360,000 videos

    TikTok, the short-form video platform, has announced that it removed over 360,000 videos in Kenya for non-compliance with its community guidelines.

    TikTok’s swift action through automated systems

    TikTok’s quarterly Community Guidelines Enforcement Report, released Wednesday, showed 60,465 account bans globally. The platform continues to enforce age restrictions, with 57,262 accounts flagged as potentially belonging to users under 13.

    Read also: Top countries with highest TikTok users in Africa

    Reflecting TikTok’s proactive approach to filtering offensive or dangerous content before it reaches viewers, the videos removed in Kenya account for 0.3 percent of all content uploaded there during the reporting period. Of the infringing videos, 95 percent were taken down within 24 hours of being flagged, and 99.1 percent were removed before anyone reported them.

    These figures demonstrate how TikTok is improving at identifying and removing harmful content using automated techniques.

    In June 2024, TikTok removed 178 million videos worldwide, 144 million of them automatically. The technology lets the platform detect risks more quickly and precisely, shielding human moderators from potentially harmful content.

    Addressing government concerns: TikTok’s commitment to online safety in Kenya

    The report follows allegations by the Kenyan government that TikTok promotes the spread of false information, enables fraud, and distributes explicit sexual content.

    Fortune Mgwili-Sibanda, TikTok’s director of government and public policy for Sub-Saharan Africa, who spoke before parliament earlier this year, said the platform’s user rules and features are intended to promote a safe and positive community.

    Read also: TikTok fires intern for AI sabotage

    “TikTok is an entertaining and joyful place because of the work we put into keeping the platform safe, including investing more than $2bn in our Trust and Safety efforts this year, globally,” Mgwili-Sibanda told parliament.

    In response to concerns raised by the Kenyan government, Mgwili-Sibanda said TikTok will continue to hold capacity-building workshops for policymakers and regulatory agencies on online safety, data privacy, and content moderation.

    “We value the opportunity to contribute to keeping our Kenyan community safe on our platform and look forward to collaborating more closely with all our stakeholders in Kenya, including government, the media, civil society, parents, teachers, guardians and our wider community itself,” he said.

  • X tests free version of Grok AI chatbot for users, aims to rival ChatGPT

    X tests free version of Grok AI chatbot for users, aims to rival ChatGPT

    Social network X, owned by Elon Musk, has shifted its strategy by testing a free version of its AI chatbot, Grok. The development was announced on Sunday. 

    Grok, previously exclusive to premium subscribers, is now available to a select group of free users in certain regions, sparking curiosity and excitement in the tech community. Opening Grok to more users is a significant step for X as it continues to build its artificial intelligence capabilities in competition with other major players in the AI space.

    Read also: Tech-X-Con 2024: Dev League’s premiere event to inspire African tech talents in Kwara

    Testing Grok access for free users in New Zealand

    The free version of Grok is currently being tested in select regions, with New Zealand the first to receive the new feature. Several users and app researchers, including one identified as Swak on X, confirmed that the feature is available but comes with usage limits: up to 10 queries every two hours with the Grok-2 model, 20 queries every two hours with the Grok-2 mini model, and three image-analysis requests per day. These restrictions are likely in place to manage load and ensure the system can handle the increased traffic from free users.

    Eligibility requirements for free access to Grok

    To access Grok for free, users must meet specific criteria. The account must be at least seven days old, and users must have linked a valid phone number to their profile. These conditions help ensure that the users accessing the free version are legitimate, not spam accounts, which is standard practice to prevent abuse and ensure a quality user experience.

    Grok-2, launched in August, brought impressive new features to the chatbot, including image generation powered by Black Forest Labs’ FLUX.1 model. The platform continued to evolve in late September, when Grok gained the ability to understand images. These advanced capabilities were previously available only to Premium and Premium+ subscribers, distinguishing the paid plans from the free tier.

    With the introduction of the free version, X is expanding access to these cutting-edge features, potentially enabling a broader audience to experiment and interact with Grok’s full range of abilities.

    X’s strategy to compete with other AI models

    By opening Grok to free users, X aims to build a more extensive user base and gather feedback quickly, which is critical to competing in the crowded AI landscape.

    AI chatbots like OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini have already captured significant market share, making it essential for X to accelerate its development and feedback cycle to refine Grok and improve its capabilities. 

    The decision to expand access to Grok suggests that X is taking a more aggressive approach to solidifying its position in the AI industry.

    Read also: Kenya suspends Telegram during KCSE exam hours to curb malpractice

    Plans for Grok and X’s AI ambitions

    This move follows a report from The Wall Street Journal in late October, which revealed that xAI, Elon Musk’s AI development company, is looking to raise billions of dollars at a $40 billion valuation. The funding round would likely support X’s broader ambitions in artificial intelligence, allowing the platform to enhance Grok and other AI projects within its ecosystem.

    By opening Grok to a broader audience, X is signalling its intention to remain competitive with established AI providers while fostering user engagement and refining its technology through real-world use. As more users access the free version, X may continue to test and refine Grok’s features, eventually offering more advanced tools as part of its premium offerings.

    With Grok now available to free users in select regions, X is taking a bold step in its bid to expand its AI capabilities and compete with industry leaders. This move also signals a shift toward gathering user feedback and refining its product to better suit the needs of a larger and more diverse audience. As X continues to roll out Grok’s features, it is clear that AI will play a central role in its ongoing strategy.

  • TikTok fires intern for AI sabotage

    TikTok fires intern for AI sabotage

    ByteDance, the parent company of TikTok, fired an intern on Saturday for “maliciously interfering” with AI model training, according to a statement issued by the Chinese company.

    The company pushed back on rumours about the extent of the damage the unidentified person caused, stating that the social media reports contained “some exaggerations and inaccuracies.”

    ByteDance issued a statement about the intern’s dismissal on Toutiao on Saturday, clarifying that the company suffered only minimal harm.

    Doubao, the ChatGPT-like generative artificial intelligence model developed by the Chinese technology giant, is the country’s most popular AI chatbot.

    Read also: Facebook aims to retain youth with enhanced social experience

    Incident Overview

    Toutiao, also known as Jinri Toutiao, is a Chinese news and information platform. It is a core product of ByteDance, which is headquartered in China.

    “The individual was an intern with the [advertising] technology team and has no experience with the AI Lab,” ByteDance posted on Toutiao. “Their social media profile and some media reports contain inaccuracies.”

    According to the company, the intern’s actions did not affect its commercial online operations, including its large language models.

    ByteDance denied rumours that the breach damaged an AI training system comprising thousands of powerful graphics processing units (GPUs) worth more than $10 million.

    ByteDance said it fired the person in August and had since notified the intern’s institution and relevant industry associations.

    Read also: Top countries with highest TikTok users in Africa

    ByteDance runs TikTok and its Chinese equivalent, Douyin, two of the world’s most popular social media apps.

    Like many of its competitors in China and elsewhere, the social media giant is investing heavily in artificial intelligence.

    In addition to powering its Doubao chatbot, the company uses the technology across a wide variety of other apps, including a text-to-video tool known as Jimeng.

    ByteDance has eased immediate worries about the incident, but it underscores broader industry concerns around employee oversight and project security. Cultivating a culture of responsibility and vigilance in tech workplaces can help sustain trust and integrity as AI becomes more integrated into company operations.

  • Facebook aims to retain youth with enhanced social experience

    Facebook aims to retain youth with enhanced social experience

    Facebook, a social media pioneer, is favoured by older generations, while younger users choose Instagram and TikTok for photo and video sharing. Meta, Facebook’s parent company, wants to change that perception.

    Meta’s head of Facebook, Tom Alison, said the platform’s original goal was to help people connect with family and friends. However, its future goal is to help people make new connections and grow their networks, which is more in line with how younger people now use the internet.

    “We see young adults go on Facebook when they are going through a change in their lives,” Alison said in an interview. Marketplace helps them set up their new homes when they move to a new city, he added, and they join parenting groups when they have children.

    Read also: Nigeria generates ₦2.55 trillion in taxes from Google, Facebook, Netflix, others in 6 months

    New Tabs Coming to Facebook: Local and Explore

    Facebook showed off two new tabs at the event: Local and Explore, which are being tested in select cities and markets. The Local tab collects information about local events, neighbourhood groups, and items for sale nearby, while the Explore tab suggests content based on the user’s interests.

    Facebook needs to attract young people because it faces stiff competition. TikTok, the popular short-video app, has 150 million users in the U.S., most of them young people (Gen Z). In response, Meta launched Reels in 2021 to compete.

    Facebook said that young people spend 60% of their time on the site watching videos, and more than half use Reels daily. A new video tab combining short-form, live, and longer videos will also roll out in the coming weeks.

    Read also: Vodacom South Africa launches 4G smartphone at lower cost to replace 2G and 3G

    According to the company, Facebook’s dating feature has grown significantly since it launched in 2019. The feature lets users look at and connect with suggested profiles. The number of conversations started on the app has increased 24% year over year among young adults in the U.S. and Canada.

    As Facebook navigates a competitive landscape dominated by platforms like TikTok and Instagram, it’s clear that the social media giant is determined to reclaim its relevance among younger generations.

    By introducing new features like Local and Explore, enhancing its video capabilities, and prioritising personalised experiences, Facebook aims to create a more engaging, dynamic platform that resonates with the evolving needs and preferences of today’s youth.

    Whether these efforts will be enough to reverse the trend remains to be seen, but Facebook’s renewed focus on youth engagement is a significant step in the right direction.

  • X’s controversial update: Blocked users can now see your public posts

    X’s controversial update: Blocked users can now see your public posts

    Elon Musk’s social media platform, X (formerly Twitter), is set to implement a controversial change that will allow users you’ve blocked to see your public posts.

    This update, announced on September 23, 2024, has sparked significant debate regarding user control and privacy on the platform.

    Read also: Top countries with highest TikTok users in Africa

    New visibility for blocked X users

    Under the new policy, blocked users still cannot interact with your posts, but they can now see your public content. Musk said that while the block function will stop a blocked account from engaging with you, it won’t prevent them from viewing your public posts.

    In the past, if a user tried to check the profile of someone who had blocked them, they would receive a notification stating they were blocked, along with limitations on viewing replies, media, and follower lists.

    Musk and his team justify this change as a way to improve transparency on the platform. A source from X pointed out that individuals could view posts from blocked accounts by logging out or using different accounts. Thus, the update seeks to align the blocking feature with how users behave on social media.

    Public reactions and concerns

    The announcement has sparked mixed reactions. Supporters believe this change could stop users from bypassing blocks with alternative accounts and encourage accountability. On the other hand, critics worry that allowing blocked users to see posts might lead to harassment or stalking, especially for those who rely on blocking to shield themselves from unwanted interactions.

    Musk’s long-standing criticism of the block feature is well-known; he has previously called it nonsensical and suggested replacing it with a more effective mute option. He has even hinted at removing the block feature entirely in favour of direct message controls.

    Read also: Telegram bows to pressure and agrees to release users’ data to authorities

    Implications for X users

    As X continues to change under Musk’s leadership, this latest update raises significant questions about user experience and safety.

    While it aims to foster a more open communication platform, it also challenges conventional ideas of privacy and control over personal content. Users may need to rethink their approaches to managing interactions on X as these changes take effect.

  • Telegram bows to pressure and agrees to release users’ data to authorities

    Telegram bows to pressure and agrees to release users’ data to authorities

    According to founder Pavel Durov, Telegram, the well-known messaging app, has revised its terms of service to deter “bad actors” from “jeopardising the integrity” of the platform.

    In a statement on Monday, Durov said Telegram will now provide the IP addresses and phone numbers of users who break the app’s rules to the appropriate authorities “in response to valid legal requests.”

    Read also: Telegram’s latest update empowers users to combat explicit content in private DMs

    User data requests to undergo legal analysis before release

    Until now, Telegram had agreed to reveal users’ IP addresses and phone numbers only if it received a court order indicating the user was a terrorism suspect. Telegram’s transparency report claims this has never occurred.

    The new policy applies to anyone accused of breaking Telegram’s rules. The company said it would “conduct a legal analysis” of the authorities’ request before revealing user data. Durov says these actions will be “consistent across the world.”

    The amended guidelines specifically target users who use Telegram’s search function as a front to sell illicit goods.

    Durov underlined, “Telegram search should be used for news and friend discovery, not for advertising illicit goods.”

    Telegram moderators have used artificial intelligence in recent weeks to make the platform’s search function “much safer.” According to Durov, content identified as harmful has been made unavailable.

    Read also: Telegram claims ‘accusations against Pavel Durov are absurd’

    Pavel Durov’s arrest and Telegram’s new policy: the connection

    Telegram’s Russian-born CEO is currently under formal investigation in France over several alleged offences, including helping to distribute child pornography and facilitating drug sales on the site. The news coincides with the announcement of Telegram’s new privacy policy.

    Security experts and law enforcement have compiled a substantial body of evidence about illicit activities on Telegram, including recruiting and group organising inside extremist organisations.

    Durov stated on his Telegram channel that he wanted to make his app “safer and stronger” after the probe.

    Durov says, “Telegram’s sudden surge in user base to 950M created growing pains that made it easier for criminals to abuse our platform. For this reason, I’ve set myself to ensure we improve in this area.”

  • Instagram introduces new teen accounts to improve safety for young users

    Instagram introduces new teen accounts to improve safety for young users

    Instagram has launched a new Teen Accounts feature to improve safety and privacy for users under 18. This initiative responds to growing concerns about social media’s impact on young people’s mental health. 

    Instagram CEO Adam Mosseri announced the changes, which will automatically convert existing accounts for users under 18 into Teen Accounts within 60 days, ensuring that new users fall into this category from the start.

    Read also: Instagram launches Creator Lab to empower Gen Z users

    Instagram Teen Accounts: Key Features

    The Teen Accounts feature includes several built-in protections designed to create a safer online environment for younger users. Accounts belonging to users under 16 will default to private, allowing only approved followers to view their content. Additionally, teens can only message people they follow, limiting interactions with strangers. Parents will also gain access to tools that let them monitor their teens’ messaging activity and the topics they engage with on the platform.

    Another significant aspect of Teen Accounts is the introduction of a “sleep mode,” which restricts access to Instagram between 10 p.m. and 7 a.m. The feature aims to encourage healthier sleep habits among teens by reducing late-night screen time. The app will also apply stricter content settings, limiting exposure to potentially inappropriate material in areas like Reels and Explore.

    Parental control and oversight

    Instagram developed these features in response to parents’ concerns about their children’s online safety. Parents will be able to see some of the accounts their teen has recently messaged and customise several settings to their preferences. For instance, parents must give consent before their children can switch their accounts from private to public.

    In a recent interview, Antigone Davis, Meta’s head of safety, highlighted the changes, saying they consolidate the controls parents have over what their children encounter on Instagram. Meta is also working on ways to determine users’ ages more accurately, so that users who misstate their age are still placed in the protective categories.

    Read also: How to edit your DMs on X: A step-by-step guide

    Meta addressing mental health concerns

    Teen Accounts arrive at a time of growing concern about social media’s impact on teens’ mental health. Heavy social media use has been associated with problems such as anxiety, depression, and body-image issues among teenagers. Given such evidence, several commentators have argued for higher standards in how platforms engage with young people.

    Meta has faced legal action over its practices concerning child protection on social media. In this context, the company has stepped up safety measures for younger users. The launch of Teen Accounts is one of Meta’s most extensive attempts to build a safer online environment for teenagers while enabling them to connect with friends and pursue their interests.

    With Teen Accounts, Instagram continues to transform its platform to protect young users, giving parents greater confidence that their children are safe online.

  • YouTube bolsters deepfake detection tools

    YouTube bolsters deepfake detection tools

    As technology continues to evolve, so do the ways people can manipulate media. Deepfakes, particularly those altering faces and voices, have become a significant concern due to their potential for misinformation.

    In response, YouTube is rolling out tools designed to detect these advanced fabrications and protect creators and viewers from the harmful effects of such manipulations.

    Read also: X offers new TV app for seamless video watching

    Understanding deepfakes

    Deepfakes use artificial intelligence (AI) and machine learning (ML) to create realistic images, videos, or audio that appear authentic but are entirely fabricated. While some deepfakes are harmless or used for entertainment, others can be misleading, damaging reputations, or spreading false information.

    YouTube’s fight against deepfakes

    To counter this growing problem, YouTube is implementing new AI-powered tools designed to recognise face and voice deepfakes.

    Uploaded videos will be analysed for manipulation to identify those that incorporate synthetic media. The goal is to give creators and viewers more control and to vouch for the authenticity of content posted on the platform.

    The tools will employ complex AI techniques to analyse videos and their accompanying audio, detecting mismatches between subjects’ facial expressions, lip movements, and vocalisations.

    Read also: What Can Players Expect from Romance and Intrigue in Dragon Age?

    If the system identifies a deepfake, YouTube will inform the video’s owner and block the ad revenue the video can generate. The technology will need to be robust enough to keep pace with the growing sophistication of deepfake videos.

    For creators, such tools offer protection, helping ensure their image and work are not used to create harmful deepfakes. Viewers, in turn, gain confidence that the videos they watch are authentic.

    While the detection tools are a significant achievement for YouTube, the struggle against fake media is far from over. As deepfakes become more advanced, ongoing innovation and continual improvement of detection methods will be critical.

    YouTube’s efforts to tackle the challenge of deepfakes reflect a broader trend in tech companies taking responsibility for the integrity of the content on their platforms. By introducing these tools, YouTube is leading the charge in ensuring a safer and more authentic online experience.