• 0 Posts
  • 113 Comments
Joined 1 year ago
Cake day: October 4th, 2023


  • looks dubious

    The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that will work across the board – then you end up with synthesized images that Google is effectively attesting are non-synthetic.

    Maybe they can make it clear that this is a best-effort system and that they will only flag some of them.

    There are a limited number of ways that I’m aware of to detect whether an image is edited.

    • If the image has previously been through lossy compression, there are ways to modify the image to make differences in compression artifacts between different parts of the image more visible, or – I’m sure – to look for such artifacts statistically.

    • If an image has been previously indexed by something like Google Images and Google has an index sufficient to permit Google to do fuzzy search for portions of the image, then they can identify an edited image because they can find the original.

    • It’s possible to try to identify light sources from shading and specular highlights in an image, and to find points of the image that don’t match. There are complexities to this; for example, a surface might simply be shaded in such a way that it looks like light is shining on it, as with a realistic poster on a wall. For generation rather than photomanipulation, better generative AI systems will probably also make inconsistent lighting go away as they improve, since it’s a flaw in the image.
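    The first bullet – looking for mismatched compression artifacts – can be illustrated with a toy model. This is a minimal sketch of the idea behind error-level analysis, where a simple scalar quantizer stands in for JPEG’s DCT-coefficient quantization and the “regions” are just synthetic value lists:

    ```python
    import random

    def quantize(values, step=16):
        """Stand-in for JPEG quantization: snap values to multiples of step."""
        return [round(v / step) * step for v in values]

    def recompression_error(region, step=16):
        """Mean change when a region is quantized (again)."""
        return sum(abs(v - q) for v, q in zip(region, quantize(region, step))) / len(region)

    random.seed(0)
    # Region that survived a prior "JPEG" pass vs. a freshly edited-in region.
    camera = quantize([random.randint(0, 255) for _ in range(1000)])
    pasted = [random.randint(0, 255) for _ in range(1000)]

    # Already-quantized data re-quantizes with zero error; fresh data does not.
    # That difference is roughly the signal this kind of analysis looks for.
    print(recompression_error(camera))   # 0.0
    print(recompression_error(pasted))   # noticeably larger
    ```

    Real forensics works on JPEG’s frequency-domain coefficients rather than raw values, but the principle is the same: regions with a different compression history re-compress differently.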

    But none of these is a surefire mechanism.

    For AI-generated images, my guess is that there are some other routes.

    • Some images are going to have metadata attached. That’s trivial to strip, so not very good if someone is actually trying to fool people.

    • Maybe some generative AIs will try doing digital watermarks. I’m not very bullish on this approach. A watermark is a little harder to remove than metadata, but any kind of lossy compression is fundamentally at odds with watermarks that aren’t very visible. As lossy compression gets better, it either tends to strip watermarks automatically – because lossy compression tries to remove data that doesn’t noticeably alter an image, and watermarks rely on hiding data there – or the watermarks have to visibly alter the image. And that’s before people actively develop tools to strip them. And you’re never gonna get all the generative AIs out there adding digital watermarks.

    • I don’t know what the right terminology is, but my guess is that latent diffusion models try to approach a minimum error for some model during the iteration process. If you have a copy of the model used to generate the image, you can probably measure the error from what the model would predict – basically, how much one iteration would change an image or part of it. I’d guess that that only works well if you have a copy of the model in question or a model similar to it.
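    The watermark-versus-lossy-compression tension above is easy to demonstrate with a toy least-significant-bit watermark. Everything here is illustrative: the quantizer is a stand-in for lossy compression, and real watermarking schemes are more robust than this, but they face the same pressure:

    ```python
    def embed(pixels, bits):
        """Hide one bit per pixel in the least-significant bit."""
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract(pixels):
        return [p & 1 for p in pixels]

    def lossy(pixels, step=4):
        """Stand-in for lossy compression: coarse quantization."""
        return [round(p / step) * step for p in pixels]

    pixels = [50, 51, 52, 53, 54, 55, 56, 57]
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed(pixels, bits)

    print(extract(marked) == bits)         # True: survives a lossless path
    print(extract(lossy(marked)) == bits)  # False: quantization wipes the LSBs
    ```

    The quantizer discards exactly the low-order detail the watermark hides in, which is the core of the conflict: data that doesn’t noticeably alter the image is the first thing lossy compression throws away.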

    I don’t think that any of those are likely surefire mechanisms either.








  • We’ve genetically engineered other colored foods before, like golden rice.

    We’ve genetically engineered many bioluminescent plants and animals.

    kagis

    We’ve genetically engineered blue flowers:

    https://www.science.org/content/article/scientists-genetically-engineer-world-s-first-blue-chrysanthemum

    We all think we’ve seen blue flowers before. And in some cases, it’s true. But according to the Royal Horticultural Society’s color scale—the gold standard for flowers—most “blues” are really violet or purple. Florists and gardeners are forever on the lookout for new colors and varieties of plants. But making popular ornamental and cut flowers, like roses, vibrant blue has proved quite difficult. “We’ve all been trying to do this for a long time and it’s never worked perfectly,” says Thomas Colquhoun, a plant biotechnologist at the University of Florida in Gainesville who was not involved with the work.

    True blue requires complex chemistry. Anthocyanins—pigment molecules in the petals, stem, and fruit—consist of rings that cause a flower to turn red, purple, or blue, depending on what sugars or other groups of atoms are attached. Conditions inside the plant cell also matter. So just transplanting an anthocyanin from a blue flower like a delphinium didn’t really work.

    Naonobu Noda, a plant biologist at the National Agriculture and Food Research Organization in Tsukuba, Japan, tackled this problem by first putting a gene from a bluish flower called the Canterbury bell into a chrysanthemum. The gene’s protein modified the chrysanthemum’s anthocyanin to make the bloom appear purple instead of reddish. To get closer to blue, Noda and his colleagues then added a second gene, this one from the blue-flowering butterfly pea. This gene’s protein adds a sugar molecule to the anthocyanin. The scientists thought they would need to add a third gene, but the chrysanthemum flowers were blue with just the two genes, they report today in Science Advances.

    “That allowed them to get the best blue they could obtain,” says Neil Anderson, a horticultural scientist at the University of Minnesota in St. Paul who was not involved with the work.

    Chemical analyses showed that the blue color came about in just two steps because the chrysanthemums already had a colorless component that interacted with the modified anthocyanin to create the blue color. “It was a stroke of luck,” Colquhoun says. Until now, researchers had thought it would take many more genes to make a flower blue, Nakayama adds.

    The next step for Noda and his colleagues is to make blue chrysanthemums that can’t reproduce and spread into the environment, making it possible to commercialize the transgenic flower. But that approach could spell trouble in some parts of the world. “As long as GMO [genetically modified organism] continues to be a problem in Europe, blue [flowers] face a difficult economic future,” predicts Ronald Koes, a plant molecular biologist at the University of Amsterdam who was not involved with the work. But others think this new blue flower will prevail. “It’s certainly an advance for the retail florist,” Anderson says. “It would have a lot of market value worldwide.”

    I imagine that it’s quite possibly within the realm of what we could do.




  • App Server for phone apps (tal@lemmy.today, posted to Ask Lemmy@lemmy.world)

    If you want to get deals for the grocery store you need their app

    That’s because they want to get their app on your phone so that they can perform data-mining using the data that the app can get from the phone environment.

    I mean, I don’t think that it’s worth bothering with trying to game the system. I’m not going to give them my data, and I don’t really care about the discount that they’re offering for it. But if you want to do so, you can probably run an Android environment on a server and use the equivalent of RDP or VNC or something to reach it remotely.

    grabs a random example

    https://waydro.id/

    A container-based approach to boot a full Android system on regular GNU/Linux systems running Wayland based desktop environments.

    Need to connect that up to VNC or RDP somehow if it doesn’t have native support.
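    An untested sketch of the sort of setup I mean, assuming Waydroid’s documented CLI and a wlroots-based Wayland compositor exported over VNC with wayvnc; check the current docs for your distro before relying on any of these commands:

    ```shell
    # One-time setup: fetch an Android image into the container.
    sudo waydroid init

    # Start the container session and bring up the full Android UI.
    waydroid session start &
    waydroid show-full-ui

    # For headless/remote use, run the Wayland compositor under a VNC
    # server, e.g. wayvnc on a wlroots compositor, then connect with
    # any VNC client:
    # wayvnc 0.0.0.0 5900
    ```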

    EDIT: I think that I’d take a hard look at how much it’s likely to save you relative to how much time and effort you’re going to spend on setting up and maintaining this, though.


  • For me, video is rarely the form that I want to consume any content in. It’s also very obnoxious if I’m on a slow data link (e.g. on a slower or saturated cell phone link).

    However, sometimes it’s the only form that something is available in. For major news items, you can usually get a text-form article, but that isn’t true of all content. The other day, I submitted a link to a YouTube video of a Michael Kofman interview about military aid to Ukraine to a community here. I also typed up a transcript, but the interview was something like an hour and a half long, and I don’t know if that’s a reasonable bar to expect people to meet.

    I think that some of this isn’t that people actually want video, but that YouTube has an easy way to monetize video for content creators. I don’t think that there’s actually a good equivalent for independent creators of text, sadly enough.

    And there are a few times that I do want video.

    And there may be some other people that prefer video.

    Video doesn’t actually hurt me much at this point, but it would be nice to have a way to filter it out for people who don’t want it. Moving all video to another community seems like overkill, though. I think it might be better to add some mechanism to Threadiverse clients to permit content-filtering rules; that’s probably a better way to meet everyone’s wants. It’d also be nice if there were some way to clearly indicate that a link is video content, so that I can tell prior to clicking on it.
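    As a sketch of what such a client-side filtering rule might look like – the host list, extensions, and function name here are all made up for illustration, not any client’s actual API:

    ```python
    from urllib.parse import urlparse

    # Illustrative list; a real client would make this user-configurable.
    VIDEO_HOSTS = {"youtube.com", "www.youtube.com", "youtu.be", "vimeo.com"}

    def is_video_link(url):
        """Guess whether a post's link is video content."""
        host = urlparse(url).netloc.lower()
        return host in VIDEO_HOSTS or url.lower().endswith((".mp4", ".webm"))

    posts = [
        "https://youtu.be/abc123",
        "https://example.org/article.html",
    ]
    print([is_video_link(u) for u in posts])  # [True, False]
    ```

    A domain check like this is also what a client could use to show a “video” badge on links, so you can tell before clicking.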



    You can still get a few phones with built-in headphone jacks. They tend to be lower-end and small.

    I was just looking at phones with very long battery life yesterday, and I noticed that the phone currently at the top of the list I was looking at – a high-end, large gaming phone – also had a headphone jack. The article also commented on how unusual that was.

    Think it was an Asus ROG something-or-other.

    kagis

    https://rog.asus.com/us/phones/rog-phone-8-pro/

    An Asus ROG Phone 8 Pro.

    That’s new and current. Midrange-and-up phones with audio jacks aren’t common, but they are out there.

    Honestly, I’d just get a USB-C audio interface with pass-through PD, so that you can still charge with it plugged in, and leave it plugged into your headphones if you want to use 1/8th-inch headphones. It’s slightly more to carry around, but not that much more.

    Plus, the last smartphone I had with a built-in audio DAC would spill noise into the headphones output when charging. Very annoying; it needed better power circuitry. I don’t know whether any given USB-C audio interface avoids the issue, but if the DAC is built into the phone, there’s a limited amount you can do about it. If it’s external, you can swap it, and there’s hope that the less-restrictive space constraints meant that they put in better power-supply circuitry.





  • Words per minute meaning literally words or characters?

    Words. Well, IIRC in tests it’s an abstract word of fixed length – five characters – since that’s roughly the average word length in English. It doesn’t mean you’re typing “antidisestablishmentarianism” over and over, one word each time.

    kagis

    Yeah:

    https://en.wikipedia.org/wiki/Words_per_minute

    Since words vary in length, for the purpose of measurement of text entry the definition of each “word” is often standardized to be five characters or keystrokes long in English,[1] including spaces and punctuation. For example, under such a method applied to plain English text the phrase “I run” counts as one word, but “rhinoceros” and “let’s talk” would both count as two.

    Karat et al. found in one study of average computer users in 1999 that the average rate for transcription was 32.5 words per minute, and 19.0 words per minute for composition.[2] In the same study, when the group was divided into “fast”, “moderate”, and “slow” groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm, respectively.

    With the onset of the era of desktop computers and smartphones, fast typing skills became much more widespread. As of 2019, the average typing speed on a mobile phone was 36.2 wpm with 2.3% uncorrected errors—there were significant correlations with age, level of English proficiency, and number of fingers used to type.[3] Some typists have sustained speeds over 200 wpm for a 15-second typing test with simple English words.[4]

    Typically, professional typists type at speeds of 43 to 80 wpm, while some positions can require 80 to 95 (usually the minimum required for dispatch positions and other time-sensitive typing jobs), and some advanced typists work at speeds above 120 wpm.[5] Two-finger typists, sometimes also referred to as “hunt and peck” typists, commonly reach sustained speeds of about 37 wpm for memorized text and 27 wpm when copying text, but in bursts may be able to reach much higher speeds.[6] From the 1920s through the 1970s, typing speed (along with shorthand speed) was an important secretarial qualification, and typing contests were popular and often publicized by typewriter companies as promotional tools.

    Stenotype

    Stenotype keyboards enable the trained user to input text as fast as 360 wpm at very high accuracy for an extended period, which is sufficient for real-time activities such as court reporting or closed captioning. While training dropout rates are very high — in some cases only 10% or even fewer graduate — stenotype students are usually able to reach speeds of 100–120 wpm within six months, which is faster than most alphanumeric typists. Guinness World Records gives 360 wpm with 97.23% accuracy as the highest achieved speed using a stenotype.[7]

    So it’s not a typo or whatever, if that’s what you mean.
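    The standardized measure from the Wikipedia excerpt is easy to sketch; this is a hypothetical calculation, not any test’s actual scoring code, with one “word” defined as five keystrokes including spaces and punctuation:

    ```python
    def words_per_minute(text, seconds):
        """WPM with the standardized 5-keystroke 'word'."""
        return (len(text) / 5) / (seconds / 60)

    # "I run" is 5 keystrokes, so it counts as exactly one standardized word;
    # typing it in 2 seconds would rate as 30 wpm.
    print(len("I run"))                    # 5
    print(words_per_minute("I run", 2))    # 30.0
    ```

    So a 360 wpm stenotype record is 30 keystrokes’ worth of output per second, which is why chorded entry is the only way anyone gets there.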

    Because 3 - 4 words per second seems a bit much to me and whoever talks that fast?

    It’s pretty fast, but then you’re talking about a professional text-entry person using the fastest plain-text entry mechanism we know about in a speed test. I’m sure that that’s not something demanded of a stenotypist in a normal real-time transcription session.

    My guess is that you probably could still make practical use of it if you didn’t need real-time transcription: record first, then play back with software that can do time stretching to accelerate the playback rate, so you can transcribe more quickly.

    'course, automated transcription’s getting better too, and that might also be an answer on that front.


  • I also have the back propped up like you mentioned with the built in lifts

    Ah hah!

    Yeah, there are some ergo keyboards that have that “reverse tilt” built in. They’re aimed more at being easier on the wrist than at trying to permit for long nails, but they do exist.

    e.g.:

    https://matias.ca/ergopro/pc/

    I also have carpel tunnel

    That’d be an argument for a mechanical keyboard where you don’t have to bottom out the keys on a press, combined with training yourself not to bottom them out; that’s a big argument mechanical-keyboard fans make for theirs versus rubber-dome keyboards. And you need a fair bit of key travel for that, yeah. Hmm.