Libcamera v0.0.1 (libcamera.org)
215 points by hasheddan on Oct 18, 2022 | hide | past | favorite | 78 comments


A few people have asked "why duplicate v4l2 functionality?", but libcamera really serves a different purpose. V4l2 (and v4l before it) are frameworks that support connecting multiple types of devices. IMO the biggest advantage of libcamera is its goal of abstracting various camera controls (ISP profiles and on-camera registers) that are painful to use with v4l2. It complements v4l2 rather than replacing it.

For example, v4l2 has an ioctl to send raw read/write commands to a camera module. Given a datasheet, one can then check which registers do what and set various features, but this requires using different registers and values for different camera modules.

Then we have ISP controls, which v4l2 doesn't touch at all. To change, say, color correction, automatic gain, or autofocus, one has to use (usually proprietary) tools from the SoC manufacturer. Some of those tools are open source and very powerful (on Rockchip, for example), but when developing apps one still has to use a completely different method to adjust a Rockchip ISP versus, say, a Raspberry Pi's.

So libcamera is a great idea to abstract all those hardware interfaces. Last time I checked, they supported the Raspberry Pi and had very minimal Rockchip support.

Imagine a time when libcamera supports all kinds of SoC ISPs (Rockchip, Broadcom, Allwinner, HiSilicon, and many others). Traditionally, getting a directly connected camera to work (under mainline Linux) has been quite difficult on various embedded devices. Getting hardware encoding to work is the second hurdle. Most often, manufacturers distribute SDKs with their proprietary binary blobs built against ancient Linux kernels, and one has to live with that if one wants to reuse various cheap IP camera boards.


> For example v4l2 has an ioctl to send raw read/write commands to a camera module. Having a datasheet one can then check which registers do what and one can set various features, but this requires using different registers and values for different camera modules.

That touches a nerve. I worked on an embedded system which used one of these camera modules, and the datasheet was probably accurate at some point during the module's development, but clearly it did not keep up as the module evolved. Eventually it was necessary to employ a bus sniffer to capture all traffic between a demonstration kit and the module, and to reproduce the streams byte for byte to get the camera to do what was needed. It was clear from the streams that the demonstration code was using device registers not described in the datasheet.

I suppose that a big customer for the module (such as a cell phone manufacturer) would get access to an engineer who could provide the necessary information, but that was not us.

> ... has been quite difficult on various embedded devices.

Agreed.


A blog post about my libcamera work for the Purism Librem 5 phone touches on this topic:

https://puri.sm/posts/cameras-its-complicated/


Some of the comments here about v4l2 and cameras make me feel so old. :)

When v4l2 came out, the main targets were TV cards (analog ones; digital/satellite/cable TV was just starting in most Western countries) and video capture cards. We did have webcams, but they were rare, basic, and only worked properly in movies. The top resolution was 640×480. I'd know: I had one of the best models back then, the Creative Go Plus!

I've defended v4l2 across a few flamewars. On Windows back then, you needed a closed-source driver with a proprietary API for each TV card. It was impossible for a third-party application to support more than a few cards, and usually you could only use the software that came with the card. On Linux, if your hardware was supported, a common API was exposed, so any third-party application could take advantage of any TV card. This is why projects like MythTV were possible and accessible to many.

Alas, I can see why a system designed over 20 years ago might not fit the needs of modern cameras and computational photography.

We have seen so many changes in these 20 years that having a new library to interface with camera sensors hardly seems something to fret over. ;)


Wow, 480 vertical pixels is not bad when you consider that current-generation M1 Macs have merely 720 vertical pixels in their built-in webcams.


Oh, 640×480 wasn't the streaming video resolution; that was the single-shot mode.

Products of the era would provide 30fps at 176x144, over the parallel port [1].

[1] https://www.ebay.co.uk/itm/392263252707


v4l2 came out in 2001. By that point, every TV tuner card had supported full-framerate capture at 768x576/720x480 for over 3 years already. It's only ~12MB/s, and the cards supported bus mastering; the only problems were storage and CPU power for realtime encoding.

https://linuxtv.org/wiki/index.php/Brooktree_Bt848

https://www.linuxtv.org/wiki/index.php/Bttv_devices_(bt848,_...


Ok, in that case I'm pretty sure the MacBook camera is also limited to 720 for stills.


Also known as the QCIF format.


libcamera uses v4l2 (among others). I don't understand what you mean.


It's interesting seeing how camera support is the biggest gaping hole in the open source smart phone world.

For example, almost everything in this graph is green, except for the camera column:

https://wiki.postmarketos.org/wiki/Devices

It's a complicated problem:

https://blog.brixit.nl/pinephone-camera-part-2/

But, in a world where open source almost always exceeds the commercial offerings, I'm surprised this is the case. It seems like the parent article and the LWN article posted (https://lwn.net/Articles/904776/) are suggesting that the trend is to just dump raw data and let software process it. I'm doubly surprised there aren't good libraries in the open source world that do this so much better than the commercial ones.

This is the main reason I feel stuck inside the duopoly of Android (barely tolerable and hostile) and iOS (completely unusable and unhackable). I need a good camera on my phone and there aren't good options for that at all.

Maybe extracting this code outside of gstreamer is a good idea if this takes us closer to that goal.


Your post is only partially correct. Even with completely open firmware, or at least a documented interface, the biggest roadblock would become the abysmal state of OSS digital image processing. Engineers at major smartphone and camera companies are paid top dollar to improve how images are processed. This is no laughing matter, and it's obvious when you compare the image quality from a no-name Chinese brand with Samsung or Apple, even though they use the exact same sensors.

FOSS smartphones such as the Pinephone would then need a whole bunch of accelerators to perform such computations, because a general-purpose CPU would be too slow; an image could take seconds to finish processing and get saved to the gallery. But at that point the Pinephone project itself would not have enough expertise for such a design, and everything would crumble.


> is obvious when you compare the image quality from a no-name chinese brand with Samsung or Apple even though they are using the same exact sensors.

This is an area where FLOSS has an opportunity to shine. Many of these algorithms are described in scientific papers, and considering FLOSS is much more collaboration-prone, I'd really expect the best algorithms (except the ones that require a lot of training data) to be implemented before long. An example of a success case: AV1.


And who's designing, manufacturing, and programming the accelerators needed for those algorithms to perform close to real time?


In the case of AV1, many of the members of the consortium.


... to add to your point: then take companies like Google, who rely heavily on ML to improve photo quality even further, and the distance between "raw data off the sensor" and "the best Samsung/Apple/Google can generate" is a HUGE gap.


As per their website (https://libcamera.org/):

>An open source camera stack and framework for Linux, Android, and ChromeOS


Also useful is the README in their git repo: https://git.libcamera.org/libcamera/libcamera.git/tree/READM...


Kieran Bingham, who pushed the release, recently did a talk at ELC-E 2022 about libcamera: https://www.youtube.com/watch?v=WMrezh0ij4M

Slides: https://elinux.org/images/6/60/Application_support_with_libc...


This article might help shed some light on why people have been working on libcamera: https://lwn.net/Articles/904776/

tl;dr: many OEM vendors are moving toward creating "dumber" cameras, with the hardware just exporting a raw data stream and all the image processing logic done in software. The existing v4l2 stack in Linux is limited and not ready for such a change; it would also be impractical to put all this new logic, often proprietary and covered by restrictive patents, into the kernel.

Quoting Laurent Pinchart referenced in the article:

> "Given the direction the industry is taking, this situation will become increasingly common in the future. With the notable exception of Raspberry Pi who is leading the way in open-source camera support, no SoC vendor is willing today to open their imaging algorithms."


> many OEM vendors are moving toward creating "dumber" cameras, with the hardware just exporting a raw data stream and all the image processing logic is done in software

So it's WinModem [0] time again, but with AndCam?

[0] https://en.wikipedia.org/wiki/Softmodem


That's indeed what I was thinking of. WinModems were an absolute pain when I started using Linux two decades ago: often outright impossible to use unless you were running Windows, and they added a ton of CPU overhead.

I wouldn't be surprised if we'll have the same situation again, with some laptop cameras that just won't work at all under Linux.


>tl;dr: many OEM vendors are moving toward creating "dumber" cameras, with the hardware just exporting a raw data stream and all the image processing logic is done in software.

Weird, I understood it as the total opposite: vendors shipping more and more sophisticated solutions, with embedded dedicated processors doing everything inside a black box (IPU6). You don't get ANY access to the raw data stream, and no access at all if you can't talk to those black boxes.


It's a mix of both at various stages of the pipeline. Cameras get dumber; undocumented ISPs, with blobs to talk to them, get more complex.


I don't get it. Why create a new, competing library instead of working with the devs of the existing library?

From the FAQ: "We see libcamera as a continuation of V4L2." Ok, why not just continue v4l2?

Or why not work with gstreamer?


In my opinion, video4linux is already a decent kernel API which exposes cameras to userspace. However, as embedded camera setups are becoming more and more complex, with more and more programmable image processors and configurable subdevice node graphs, the userspace code to interact with the mess is becoming more and more complex. My understanding is that libcamera builds on top of the video4linux kernel API, but presents a much simpler interface.

And they already work with gstreamer. Libcamera provides a gstreamer plugin. But because libcamera is also a separate library, people who don't want to add all of gstreamer as a dependency can still get the benefits.

I'd love it if people who are more involved could correct me if anything I said is inaccurate.


Ok, a simpler library on top of, and in collaboration with, the existing library I can get behind. Perhaps they should add this to their FAQ.


From the project documentation there's no indication of it being based "on top of" V4L2; in fact, the exact opposite. libcamera has compatibility layers that add V4L2 and "Android Camera HAL v3" (which I'm not familiar with) on top of libcamera.

https://libcamera.org/docs.html


A lot of software uses the video4linux kernel interface directly. That v4l2 compatibility layer is to make those applications go through libcamera even though they think they're using the kernel interface directly.


Exactly - that's the definition of a compatibility layer!

If you need to use a camera that's supported by libcamera, but not natively by v4l2, and you prefer to use the v4l2 API, then libcamera's v4l2 compatibility layer will let you do that.

But it seems the goals of libcamera are quite different from v4l2's. v4l2 (which I used years ago) seems more about supporting the minimum common feature set of cameras: basic video streaming. libcamera appears to be more about supporting the greatest common feature set, with CPU implementations of features where necessary (hence the Mesa comparison).


The thing is, libcamera uses video4linux, because video4linux (together with the rest of the media infrastructure userspace API[1]) is how camera devices are exposed by the kernel. It's just that not all cameras are as simple as "find the /dev/videoX device and throw ioctls at it" anymore; you need to set up the configurable subdevice node graph (also using video4linux APIs), and maybe the frames the device gives you need post-processing, etc. There's nothing in libcamera which you couldn't do with video4linux APIs + userspace code, since libcamera is itself just something which sits in userspace and uses video4linux APIs.

The compatibility layer simply makes applications which use video4linux in the "find the /dev/videoX device and throw ioctls at it" way work even with cameras which are much more complicated to set up and configure. But it just replaces one usage pattern of the video4linux kernel interface with a different, more complex usage pattern of the video4linux kernel interface (and the media controller kernel interface) + potential userspace processing.

So libcamera is absolutely "on top of" video4linux is my point.

[1] https://www.kernel.org/doc/html/latest/userspace-api/media/i...


OK, I'll take your word for it (libcamera using v4l2), but are you sure there aren't also cameras supported by libcamera directly? There must be some reason for libcamera's v4l2 compatibility layer ...

I hadn't heard of libcamera until reading this story/thread, and it seems to provide useful functionality, so I'm trying to understand the apparent hostility there seems to be towards it, e.g. saying it should be part of v4l2. It certainly seems to provide added functionality "over and above" v4l2, and it makes sense if it uses v4l2 for lower-level camera access, although from a user perspective that really makes no difference.


To answer the first part: There's a lot of software out there which iterates through `/dev/videoX` devices, opens one of them (maybe by presenting the user with some UI to choose between them), and then just starts using it. You can query which pixel formats are supported and set one, you can query the range of supported frame rates and set a frame rate, you can query and set resolutions, etc. by interacting only with the /dev/videoX device you chose. However, this only works with some cameras; notably, it works with USB webcameras and most laptop webcameras.

But in the embedded/phone world, and with some recent laptops, this can't work. There is simply no single device which is "the" camera. The camera system is a complex pipeline of different image processing nodes. Here's how the graph looks on the hardware I usually work with: https://i.imgur.com/NSmu4Tj.png. The actual camera is the ov5645 node at the top, but that's not really useful in itself. You need to configure the media graph; in this case, I have set it up as ov5645 -> csiphy1 -> csid1 -> ispif1 -> vfe0_pix, and finally, /dev/video3 is a video4linux device (as opposed to a "subdev", which the others are) which an application can interact with using the video4linux IOCTLs. And importantly, you can't configure things like resolution, cropping, etc. on /dev/video3; you have to use the media control API to configure the ov5645 node's resolution, then you can additionally configure scaling and cropping and other forms of image processing on the vfe0_pix node. If you want to access your frames without doing any processing, you can instead link up the ispif1 node to a vfe0_rdiX node (RDI = Raw Dump Interface). My graph is relatively simple because only the vfe0_pix node does actual configurable image processing, but there's no reason the hardware couldn't be structured in a way where different nodes do different kinds of processing.

This is the stuff libcamera understands, and the stuff which normal "open a /dev/videoX device and throw ioctls at it" style applications don't understand. The goal is for libcamera to figure out what graph it has to build to do what you want, to figure out what image processing operations are available at your various nodes, maybe apply various post-processing steps such as debayering if that's not handled by the image processing hardware, etc.

So the value of the compatibility layer is that an application which expects to just care about a /dev/videoX device can have its API calls intercepted, and libcamera will basically try to make it look to the application as if it's talking to a simple device where framerate/resolution/pixel format control/post processing/etc is all handled by the /dev/videoX device. In the background, libcamera will build the media graph, configure the graph nodes, do post processing, and whatever else is necessary to make the camera work. It basically makes a complex camera system look to the application as if it is a simple USB webcam.

I hope some of this makes the problem that's being solved here a bit more clear.

As to why people are hostile to it, I'd just chalk that up to the general culture on Hacker News of making fun of and demeaning other people's work, especially when they don't understand it.


Libcamera does work with gstreamer.

Libcamera is a glue framework between proprietary IPA-packaged code and the v4l2 subsystem. Many of the tasks you'd want to use libcamera for are outside the scope of v4l2.

API glue: https://libcamera.org/api-html/namespacelibcamera.html


I don't know the reasons of this specific group or person, so I'm obviously not speaking for them. When I work on something, I like to do it my way. I don't think I or anyone else has a moral obligation to donate their time to existing projects just because we have similar interests or goals.


[flagged]


This is a case of "We already have a good kernel interface for interacting with cameras (video4linux), but the userspace code which uses that kernel interface is getting increasingly complex as hardware evolves. Let's factor that complexity out into its own library."


libcamera: from the people who made v4l2 a mess


Interesting statement from the author of Megapixels: https://git.sr.ht/~martijnbraam/megapixels


That's both uncharitable and incorrect.


The people who downvote this comment probably have not used libcamera. All the smart-pointer stuff makes it needlessly difficult. Ironically, the first example of how to use libcamera that I could find leaks memory.


From the name I guess v4l2 is the second version of something? So third time's the charm?


Video4Linux, and yes, it's the second version. v4l was added to the kernel in the late 90s, and v4l2 replaced it in the early 2000s.

https://en.wikipedia.org/wiki/Video4Linux


libcamera is intended to fill the same role for cameras & image processors as mesa has been filling for GPUs.


I was a little surprised to see Bugzilla being chosen for a new project. I have historically not enjoyed using it. Maybe it's better now.


I always find it odd: why v0.0.1 and not v0.1.0? How can you start with a patch release if there was nothing before?


That's a great way to put it if versions are 1-indexed, but if they are 0-indexed it might make sense.

The empty-set / no-code state is what the project starts with, and it has no versioning, as it is inherited by all projects.

And then 0.0.1 is a change that introduces no backwards incompatibilities.

But we are slicing the cheese pretty thin at this point.


These projects are so important for being able to run the software we want on the hardware we want. While other comments have pointed out that cameras are an inflection point for FOSS, I think they've managed to understate it. For all the commenters asking why this should exist, please read this exchange with Greg KH on LKML discussing the new Intel IPU/MIPI cameras [1]. There are Intel hardware devs who explicitly state that the proprietary stack exists because the currently available in-kernel functionality is not functional. libcamera is specifically pointed out as the necessary project to give camera support a future in the kernel and to incentivize FOSS code for camera interop. The hardware is currently a new frontier of binary blobs, proprietary APIs, and weird shims between kernel, drivers, software, and userspace. The thing about Intel is that, like Apple and unlike AMD, they've realized that a computer is more than a CPU. It's a platform. And the functionality and desirability of your platform depends on the laundry list of quality features you deliver... not just GHz and FLOPS. So they're trying to be Apple in their own way, much how Apple has succeeded in be(at)ing Intel. Interestingly, in my opinion, quality open source software for their platforms could be the dark horse for their success.

And this could not have come up on HN at a better time. Just a few days ago I bought my first-ever new laptop (!) for research, an X1 Nano Gen 2. I've been a Linux user on old Thinkpads my entire life, so I naively bought this hardware on a lark because it filled a very specific niche for me. I zfs-sent my OS from my old Thinkpad, tweaked a few things, and everything worked phenomenally well. Then I went to try out the fancy webcam... I still haven't managed to get the mish-mash of code they offer to work for the Nano's OV2740 camera sensor, which is read through Intel's new Alder Lake integrated "IPU", although some people have had more success on Dells [2]. Which is sad given how necessary a webcam is these days.

In the spirit of Hacker News, if anyone can point me in the right direction for getting the camera working, I'm all ears. Currently the module/firmware refuses to load after compiling the six different libraries necessary for support. I've been working on getting enough information together to open an issue on GitHub [3].

[1] https://www.spinics.net/lists/kernel/msg4467429.html

[2] https://bbs.archlinux.org/viewtopic.php?id=277462&p=3

[3] https://github.com/intel/ipu6-drivers


The description is a bit nebulous and I am left with a bunch of questions.

> a camera stack that is open-source-friendly while still protecting vendor core IP

What is a "camera stack"?

A software library (perhaps with some binary modules)? Or does it extend to hardware as well, like some sort of spec: "here's what each I/O pin should do"?

Also, what type of cameras are we talking about? Is this for built-in cameras on embedded devices (such as phones or laptops)? Is this for people connecting their reflex camera through USB in order to download their high quality pictures? For all of them?


Check out this post to understand where libcamera fits into the picture.

The Raspberry Pi Foundation replaced their MMAL stack (another camera framework) with a libcamera stack.

https://www.raspberrypi.com/news/an-open-source-camera-stack...


Any reason they chose to use meson instead of make?


All C projects started in the last ~2 years that I'm aware of use meson.

Besides, make alone can't do the things they're using meson for. They'd need at least a configure script, or autotools. Meson is preferable to that.


Anyone have any insights on the significance of this specific release?


If you're going to use Qt, why not just use QtMultimedia, which already supports cameras in more ways than this?


libcamera doesn't use Qt (aside from some example apps), and QtMultimedia doesn't support cameras in the sense libcamera does at all (QtMultimedia sits on top of libcamera in the software stack).


I'm always surprised by the bland and uninspiring names chosen by FLOSS projects.

Anyway, good idea for a project!

Does anyone understand the difference from Apertus AXIOM? https://www.apertus.org/fr/en


You shouldn't be surprised. Choosing 'fun' names is subjective. This is a tool, not a Disney ride. Not everything needs to trigger a dopamine rush, and it's ok to have clear, straightforward names in software, seeing as naming things is hard.


Personally I don't want fun names, but I wish people used unique, writable, pronounceable names. Try searching for "camera" and see if you find this project. On the other hand, "v4l2" is pretty unique. If all text editors were called "TextEditor", the name would say what the software is, but it would be useless as a name. A company can just prepend the company name ("Microsoft Word", assuming the company name is unique enough), but for an open source project...


Your argument is one heck of a strawman. Why on earth would you search for "camera" in a search engine and be disappointed that libcamera didn't make the front page of your results?

Searching for "lib camera", "libcamera", or "camera lib" returns this project as the top result.


Sorry, I meant it as a general comment about project names, not this one in particular. And it's just my preference: given that names that say what a project does don't really help me discriminate between projects, I feel like unique names would be better. And if the alternative is, as at many companies, a ton of money put into endless brainstorming sessions by well-paid people, maybe just going for a fun, unique name would be cheaper and no less effective.

"Apple" does not return the fruit in the top results, but I wouldn't say it's a good name for a new project in 2022. Same for "Word": I wouldn't advise calling your project that, even if the top results are Microsoft Word and nothing related to the actual meaning of the word.

I come from an industry where everyone uses the same prefixes/suffixes (related to the industry) together with names that "mean something". As a result, names are never really unique, and the only way to differentiate products is to always mention them together with the company name. There is one product in particular whose company name says what the product does, and it is similar to another product. For that one I _always_ need to Google it and check the geographical location of both companies, because that's how I tell them apart: the names (and in this case the company names) do not help me.


I love bland and uninspiring names. What's libcamera? A library for dealing with cameras, I'd wager. What's AXIOM? Not sure... My first guess would be a proof assistant.

It's also not just FOSS; Apple, for instance, has many ThingKits (HealthKit, UIKit) but also TextEdit.


100% agree! Contrast with "Pokemon or Big Data" https://pixelastic.github.io/pokemonorbigdata/


The same can be said about anything related to MLOps. The naming schemes are so out there, but they all end up sounding similarly... generic. It's kind of weird, really.


I don’t, and here’s why: if libcamera turns out to be a bust, then we’ve burnt that name. The next library will have to be called something like libeyeopener, and I’ll have to spend years telling people “although libcamera sounds like the ‘official’ camera library, the one you actually want is libeyeopener.”

“Fanciful” names don’t squat on valuable namespace and allow different approaches to duke it out without one of them getting the advantage of sounding like the “official” one.


If the libs are called Barracuda and Seagull, you'll have to spend years telling people "although you have heard of Barracuda, the lib you want to use now is Seagull". I don't see that as very different. People don't make library choices based on the name alone; they search for what is recommended online and look at examples/features/etc.

I do concede that all else being equal people will gravitate towards the more "official" sounding library, but if all else's equal, it's not really an issue.


Even if that were to happen, it wouldn't be the first time the open source community had to find a name for something to differentiate it from previous efforts.

Choices like libname2, libname3, libname-ng and plenty of others have been used by projects before.

If anything the poor state of search on the internet is a problem.


It's not a true HN thread about a FLOSS project without gall towards the name.


I have to repost a comment: "I remember a friend of mine arguing prefixing names with g, q and k was a reason linux on the desktop failed. I asked "So, how do you explain the success of ipod, iphone, ipad and itunes?""

People complaining about influence of the name of a project on its success should publish unbiased statistics to base their arguments on.


Apple has names like Mail, Pages, Numbers, Calendar, Photos... I actually think this is great for most users. What makes more sense for a user who knows very little about computers: using an app called Mail to check email, or one called Thunderbird?

The only place where this convenience fails is Google searches.


"Is Numbers a calculator? Or an accounting app maybe?"

"Is Calendar the one I just installed, or is it the one Apple is forcing me to use?"

"Wait, so there is an app called Mail that I cannot remove, but my e-mail app is called Thunderbird? Let me note that."

1) People will learn the names of the apps they use; it's not that hard. 2) Not all browsers can be called "browser".


libcamera isn't firmware for cameras. It's a userspace framework for enabling and abstracting away the terribleness of v4l2 & quirky hardware.


Don't look at my project names[0], then...

[0] https://github.com/ChrisMarshallNY#here-on-github


Agreed. More examples :

- libpipi (why such a boring name for a library that I'm guessing helps you take a piss?)

- libcaca (why such a boring name for a library that helps you take a shit?)

- Thunar (what an utterly uninspired name for software that 100% accurately behaves like a Norse god)

- Flameshot (Another fire shooting application with a boring name)

- Zathura (I hate that from the name alone I can tell that I'm gonna be pulled into an intergalactic space adventure. Why not something random?)


On the upside, the name tells you what it is and does.


Well, not really. It deals with a portion of what you need to build a camera, so which portion exactly is it? And now if someone else creates another FLOSS camera library/toolkit, they're going to call it libcam and we'll all get confused about who's who.


> And now if someone else creates another FLOSS camera library / toolkit they're going to call it libcam and we'll all get confused about who's who

You've identified a problem with ALL names. If someone developing a new library in the same general area (a library for cameras) chops three letters off the end of an existing library's name for their project, that is the second project's fault, not the first's.


Well, you need to think about the whole thing before coming up with a name. I don't care about blaming people here.


> I don't care about blaming people here

Says the person blaming the creators for their name.


I only said "I'm always surprised". I am wondering about this topic, and the replies are helpful.



