[Ffmpeg-devel-irc] ffmpeg.log.20191030

burek burek at teamnet.rs
Thu Oct 31 03:05:01 EET 2019


[01:54:53 CET] <|szn-r|1> does ffmpeg support "pausing" ? is this possible since ffmpeg is commandline
[01:57:34 CET] <pink_mist> I'd presume it supports it just the same as any other commandline program, using ctrl+z and regular job control commands of your shell
[01:58:16 CET] <|szn-r|1> pink_mist so pausing is possible using ctrl+z ?
[01:58:59 CET] <pink_mist> like I said: I'd presume so
[01:59:05 CET] <pink_mist> I have no direct evidence
[01:59:14 CET] <pink_mist> because I've never wanted to try
[02:00:24 CET] <pink_mist> it's also not something that ffmpeg needs to do much about, it's your shell that should handle it
[02:00:42 CET] <pink_mist> if your shell doesn't support job control, get a better shell
[02:00:44 CET] <klaxa> it may behave strangely when working with pipes
[02:00:47 CET] <klaxa> i guess?
[02:00:52 CET] <pink_mist> maybe, yeah
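The job control pink_mist describes can be sketched with signals, which is what ctrl+z and fg do under the hood (a sketch assuming a POSIX shell; the filenames are placeholders):

```shell
# Pause and resume a running ffmpeg via job-control signals.
ffmpeg -i input.mp4 -c:v libx264 output.mkv &  # start the encode in the background
FFPID=$!
kill -STOP "$FFPID"   # pause: the process stops consuming CPU
sleep 10
kill -CONT "$FFPID"   # resume where it left off
wait "$FFPID"
```

As klaxa notes, this may behave oddly when ffmpeg is reading from or writing to pipes, since the peer process keeps running.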
[02:01:23 CET] <|szn-r|1> obs-studio support pausing (finally) after all those years
[02:01:37 CET] <|szn-r|1> and obs-studio uses ffmpeg
[02:01:49 CET] <pink_mist> that's not really relevant
[02:02:06 CET] <JEEB> OBS uses the API or generates its own input
[02:02:26 CET] <JEEB> I mean, you can do it fine with the API by just rolling forward. ffmpeg.c can't really do that dynamically
[02:02:32 CET] <JEEB> it was made for file -> file
[02:02:41 CET] <JEEB> the fact it works for a lot of other use cases is a very happy accident
[02:03:03 CET] <JEEB> also pausing a live stream usually means that you are still pushing something out :P
[02:03:08 CET] <JEEB> not that you pause your encoder fully
[02:03:21 CET] <pink_mist> yeah, that's quite a different thing
[02:03:40 CET] <|szn-r|1> i am talking about just "recording"
[02:04:42 CET] <|szn-r|1> is it possible for ffmpeg to do everything that obs-studio can do?
[02:05:21 CET] <JEEB> theoretically: yes, realistically: unless someone wrote code for it already, no
[02:05:34 CET] <JEEB> and I guess you mean ffmpeg.c when you say ffmpeg
[02:05:51 CET] <JEEB> you can do a lot more with the FFmpeg libraries than what ffmpeg.c does
[02:09:37 CET] <|szn-r|1> i mean with ffmpeg.exe
[02:09:51 CET] <pink_mist> you say .exe like you're on windows
[02:10:01 CET] <pink_mist> I don't believe windows supports job control with ctrl+z
[02:10:28 CET] <pink_mist> anyway, the ffmpeg executable binary is what JEEB means when he says ffmpeg.c
[02:14:28 CET] <|szn-r|1> i see
[02:15:02 CET] <|szn-r|1> if obs-studio uses ffmpeg  why is obs-studio so buggy/unstable when using  ffmpeg-custom
[02:15:16 CET] <pink_mist> ask the authors of obs-studio
[02:16:58 CET] <JEEB> OBS uses various back-ends/modules
[02:17:07 CET] <JEEB> I have no idea what it does with ffmpeg-custom
[02:32:41 CET] <|szn-r|1> obs-studio is fine when you use normal setting,  with normal setting you can only use  x264/aac
[02:33:19 CET] <|szn-r|1> but if you want to use any other encoder like vp9/opus  you HAVE to use ffmpeg-custom
[02:37:57 CET] <ltrudeau> Anyone know in what version of RealVideo directional intra prediction was introduced?
[02:38:08 CET] <ltrudeau> RV30?
[02:44:07 CET] <JEEB> too bad the multimedia.cx page is not really great for these https://wiki.multimedia.cx/index.php?title=Real
[02:45:44 CET] <JEEB> rv10/20 seem to base on the H.263 decoder/encoder and be similar? (both are in rv10.c)
[02:46:55 CET] <JEEB> but yea, you might be able to poke pross on the -devel channel or kostya on his blog/e-mail (https://codecs.multimedia.cx/)
[02:47:02 CET] <JEEB> for actual details
[03:03:37 CET] <ltrudeau> JEEB: Yeah, I did not see directional intra in rv10.c
[05:33:25 CET] <AlexApps> Hey, I'm making an FFmpeg command to generate a slideshow; currently I input the frames with a framerate of "1/5", then I apply filters and output with a framerate of 30, which takes quite a long time to process.
[05:34:20 CET] <AlexApps> Is there any way to apply the filters once per input frame and then duplicate that frame to the desired length, cutting the processing time down drastically
[05:44:31 CET] <furq> that's probably already what you're doing
[05:44:55 CET] <furq> if you're doing something like -framerate 1/5 -i foo%d.png -vf your_filters,fps=30 out.mkv
[05:47:35 CET] <AlexApps> does it work like that if the fps is set through -r 30 instead?
[05:53:16 CET] <furq> i assume that happens after filtering but i don't know for sure
[05:54:08 CET] <AlexApps> is there any way to test?
[05:54:24 CET] <AlexApps> as in a way to check how many frames a filter was applied to?
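The ordering furq describes, plus one way to count how many frames a filter actually saw, can be sketched like this (filenames and filter choices are placeholders; showinfo is used only as a frame counter):

```shell
# Filters placed before fps in the chain run once per input frame;
# fps then duplicates frames afterwards to reach 30 fps.
ffmpeg -framerate 1/5 -i slide%03d.png \
       -vf "scale=1920:1080,fps=30" -c:v libx264 -pix_fmt yuv420p out.mkv

# To check how many frames a given point in the chain processed,
# insert showinfo there and count its log lines.
ffmpeg -framerate 1/5 -i slide%03d.png \
       -vf "scale=1920:1080,showinfo,fps=30" -f null - 2>&1 | grep -c Parsed_showinfo
```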
[06:32:32 CET] <KombuchaKip> To decode an audio file and read its metadata, is it necessary to call read_header()? When is it necessary? https://ffmpeg.org/doxygen/trunk/structAVInputFormat.html#a286d65d159570516e5ed38fcbb842d5a
[11:29:54 CET] <BeerLover> I am trying to transcode an mp4 to hls with segment files and index file and trying to upload it to S3. I am using this: https://dpaste.de/30Xr
[11:30:11 CET] <BeerLover> but in the S3 bucket I only see index.m3u8 file and no segment files
[13:01:04 CET] <DHE> BeerLove_: You've split your outputs. When you specify simply "index.m3u8" you've set an output filename and all the HLS options apply there. Then you specify a second output of http://....
[13:01:43 CET] <BeerLove_> didn't understand you DHE
[13:06:55 CET] <DHE> ffmpeg lets you have multiple inputs and multiple outputs for a single job
[13:07:04 CET] <DHE> you've accidentally requested 2 output jobs
[13:07:55 CET] <BeerLove_> but 1 mp4 will produce many .ts and 1 .m3u8 files?
[13:08:08 CET] <BeerLove_> my question is how to upload these to s3
[13:08:10 CET] <DHE> yes, but check the working directory you ran that command in
[13:10:39 CET] <BeerLove_> DHE ffmpeg -v trace -re -y -i song.mp4 -profile:v baseline -b:a 320k -hls_time 10 -hls_allow_cache 1 -level 4.0 -hls_segment_filename segment%d.ts -f hls index.m3u8 -method PUT http://\<S3-BUCKET\>/index.m3u8
[13:10:44 CET] <BeerLove_> looks fine to me
[13:10:46 CET] <DHE> I know
[13:10:56 CET] <DHE> https://dpaste.de/ozh9 I modified your paste to break it up how ffmpeg will interpret it
[13:10:57 CET] <BeerLove_> taking 1 input
[13:11:09 CET] <DHE> line 1: global options   line 2: input   line 3: first output   line 4: second output
[13:12:42 CET] <BeerLove_> ok
[13:13:02 CET] <BeerLove_> so removing "index.m3u8" after -f hls will remove 1st output
[13:13:04 CET] <BeerLove_> right?
[13:13:06 CET] <DHE> yes
[13:13:26 CET] <BeerLove_> but still method put won't work correctly
[13:13:43 CET] <BeerLove_> as it will rewrite all segments as index.m3u8 in the bucket
[13:14:09 CET] <DHE> the next thing I'd try is setting the hls_segment_filename to also be an HTTP URL
[13:17:13 CET] <BeerLove_> it doesn't support that right? in docs it's just absolute path or local path of file
[13:22:02 CET] <DHE> "Should a relative path be specified, the path of the created segment files will be relative to the current working directory."
[13:22:10 CET] <DHE> I still think an http URL here is correct
[13:22:20 CET] <DHE> still, doesn't S3 require authentication to do uploading?
[13:24:35 CET] <BeerLove_> DHE yes
[13:25:13 CET] <BeerLove_> That's the next question. Ideally it would be through "aws s3 cp <file> s3://<bucket>/<file>"
[13:25:28 CET] <BeerLove_> how will segments be piped into this?
[13:26:31 CET] <DHE> well your use of -re makes me assume you're going for some kind of real time streaming with S3 as the storage medium
[13:29:09 CET] <DHE> is that correct?
[13:30:43 CET] <BeerLove_> yes
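Putting DHE's suggestions together, a single-output sketch might look like this (the bucket URL is a placeholder, and a bare HTTP PUT will not by itself satisfy S3's authentication):

```shell
# One hls output: both the playlist and the segments are pushed with HTTP PUT,
# because hls_segment_filename is itself an HTTP URL.
ffmpeg -re -i song.mp4 -profile:v baseline -b:a 320k \
       -hls_time 10 -hls_allow_cache 1 -level 4.0 \
       -hls_segment_filename "http://example-bucket/segment%d.ts" \
       -method PUT -f hls "http://example-bucket/index.m3u8"
```

For authenticated S3 uploads, a signing proxy or a tool like the aws CLI watching a local output directory is the usual workaround.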
[13:39:17 CET] <snooky> hi all
[13:39:21 CET] <snooky> I am currently trying to create a virtual video and audio device. I already have a virtual video device with v4l2loopback, and with ffmpeg I can send a picture to this device. Now I need the sound virtually; there is snd-aloop. OK. So with ffmpeg I send the picture to the virtual video device and the sound to the virtual audio device. However, I have a delay of 1-2 seconds between picture and sound. At the end I build an rtp stream out of the video and the
[13:39:21 CET] <snooky> audio device. But if I send no picture to the virtual device, the rtp server aborts because no stream is there. How do I fix these errors?
[13:41:18 CET] <BeerLove_> DHE yes
[13:48:51 CET] <snooky> DHE?
[13:54:04 CET] <BeerLove_> snooky is DHE the man?
[13:54:43 CET] <snooky> sry
[14:06:19 CET] <DHE> I am A man....
[14:06:41 CET] <DHE> and beerlover has left...
[14:19:33 CET] <DHE> is there a container that is simultaneously streaming friendly and has a minimum of overhead? mp4 with faststart largely works but somehow is larger than my source input
[14:21:14 CET] <tablerice> is your output bitrate higher than your source?
[14:21:45 CET] <DHE> I'm just remuxing existing content for storage in an S3-like system
[14:27:50 CET] <klaxa> what's wrong with mkv?
[14:28:53 CET] <furq> if by streaming-friendly you mean you can stream it to browsers then no
[14:29:01 CET] <furq> other than webm but i assume that doesn't work for you
[14:36:06 CET] <tablerice> I'm trying to strip out an audio stream using negative mapping, but I'm getting an error "unsupported video codec on stream #2"... Is there a way to delete a stream that has an unsupported codec? That's kinda why I'm trying to delete it lol
[14:52:31 CET] <DHE> I mean I can do "ffmpeg -i http://.../filename.ext" and I can expect it to run rather nicely
[14:53:19 CET] <DHE> mp4 is probably good enough, but it's bugging me that the source file still somehow is smaller
[15:12:16 CET] <lilibox> hi
[15:13:59 CET] <lilibox> i would like to know the current proper way to do this: i have a couple of images and want to merge them into an .mp4, but i want to put, let's say, 10 seconds of black frames before and 15 seconds of black frames after
[15:28:39 CET] <snooky> how i can write the audio to the soundcard?
[15:28:59 CET] <snooky> ffmpeg -re -i file.mp4 -f v4l2 /dev/video1
[15:29:24 CET] <snooky> this streams the video to /dev/video1 but how i can send the audio to the soundcard at the same time?
[15:42:20 CET] <kepstin> snooky: you can use multiple outputs with different formats in an ffmpeg command
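A sketch of what kepstin describes, assuming the snd-aloop device shows up as hw:Loopback (check `aplay -l` for the real device name):

```shell
# One ffmpeg process, two outputs in lockstep: raw video to the
# v4l2 loopback device, PCM audio to the ALSA loopback device.
ffmpeg -re -i file.mp4 \
       -map 0:v -f v4l2 /dev/video1 \
       -map 0:a -f alsa hw:Loopback,0
```

Keeping both outputs in a single ffmpeg process, rather than running two, avoids much of the drift between picture and sound.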
[15:42:39 CET] <snooky> alsa @ 0x5588052feae0] cannot open audio device hw:1.1 (No such device)
[15:42:39 CET] <snooky> Could not write header for output file #1 (incorrect codec parameters ?): Input/output error
[15:42:41 CET] <snooky> hmpf
[15:45:23 CET] <ponyrider> snooky: could this be helpful https://trac.ffmpeg.org/wiki/Capture/ALSA#Recordaudiofromanapplicationwhilealsoroutingtheaudiotoanoutputdevice
[15:46:41 CET] <ponyrider> basically, configure alsa?
[15:51:24 CET] <snooky> https://nopaste.linux-dev.org/?1267502
[15:51:32 CET] <snooky> https://nopaste.linux-dev.org/?1267503
[15:51:41 CET] <snooky> i can open the rtp stream and see the video
[15:51:45 CET] <snooky> but no audio
[15:54:16 CET] <ncouloute> durandal_1707: Not sure if this is the proper way to upload sample/command but this is a zip of the 2 files and txt file with command in it. https://ufile.io/qs7kdlms  I use zeranoe 4.2.1 32bit static build. Other clips seem fine if I use fps=fps-60000/1001:round=down but they were also off using fps=fps-60000/1001. round=near being the default it seems.
[16:02:39 CET] <ponyrider> snooky: https://www.ffmpeg.org/ffmpeg-devices.html#pulse-1
[16:02:56 CET] <ponyrider> snooky: i dont know what you are trying to do anymore. dont you want to stream to your soundcard?
[16:03:28 CET] <snooky> video to virtual video device
[16:03:36 CET] <snooky> audio to virtual audio device
[16:03:54 CET] <snooky> and then mux a rtsp stream from the virtual video and audio device
[16:06:43 CET] <sine0> I need to do some batch resize and then crop of images, what is best ffmpeg or imgmagick
[16:25:11 CET] <ponyrider> snooky: you should be able to do something like ffmpeg -i NAME -f pulse "device 0" ...
[16:25:40 CET] <ponyrider> providing you are using pulseaudio
[16:27:25 CET] <snooky> this is a root server in a data center
[16:27:36 CET] <ponyrider> so what?
[16:28:16 CET] <snooky> -.-
[16:28:17 CET] <ponyrider> sine0: imagemagick imo
[16:29:09 CET] <ponyrider> you said soundcard
[16:29:16 CET] <snooky> no physical soundcard
[16:29:22 CET] <snooky> no physical graphiccard
[16:29:49 CET] <ponyrider> cya
[16:32:01 CET] <snooky> I have to split the video and send it to a virtual video device and a virtual audio device, and then I have to rebuild the video from the video and audio devices together and send it to a rtsp server. That works too, for the video. But I have no audio.
[16:42:35 CET] <durandal_1707> sine0: ffmpeg, obviously, imagetragick not
[16:46:07 CET] <sine0> lmao, fight!
[16:46:43 CET] <snooky> or pipe
[16:46:50 CET] <snooky> but how i pipe audio
[16:47:04 CET] <snooky> and then use this pipe in another programm?
[16:57:08 CET] <durandal_1707> ncouloute: both input files are CFR, and you use fps with same output fps, so where is bug?
[17:05:59 CET] <ncouloute> My first instinct was not to reencode, but if I don't reencode I get a vfr file. Same with mts files from Panasonic cameras as well; the only solution for that is to remux to mts/ts and then do a binary concat. That causes other issues... but on the subject of the original command: if you look at the debug log, the first frame of the second file should produce a pts_time of 16.7253. I go to that frame in the file. it is
[17:05:59 CET] <ncouloute> actually the frame of the video before it. On the next line of the log you can see that the Parse_fps moves the frame for whatever reason. I would expect that if the file is actually cfr the fps filter wouldn't need to touch the frame timings?
[17:11:29 CET] <durandal_1707> ncouloute: same happens if you use concat filter instead?
[17:13:25 CET] <ncouloute> yes it did, but I will retest it again.. I dont really like that method because it uses a lot of memory. out of memory error when trying to concat 74 files with 32bit version of ffmpeg. I suppose I can take the jump to 64-bit but that limits me in which machines can use it.
[17:15:52 CET] <durandal_1707> ncouloute: it should not use lot of memory
[17:16:03 CET] <durandal_1707> if it still does with latest master it is bug
[17:17:00 CET] <durandal_1707> ncouloute: basically with concat demuxer i see that one frame is duplicated, and that just in files split
[17:39:29 CET] <snooky> i don't get audio
[17:39:33 CET] <snooky> aaaaarrrrrrrrggggggggg
[17:48:54 CET] <KombuchaKip> To decode an audio file and read its metadata, is it necessary to call read_header()? When is it necessary? https://ffmpeg.org/doxygen/trunk/structAVInputFormat.html#a286d65d159570516e5ed38fcbb842d5a
[17:54:10 CET] <ncouloute> I think I used -r when I tested concat filter...So not a real test...Trying to figure out this filter_complex string is taking a while though. Apparently this file has 4 streams and concat wants me to map them all. :)
[17:56:59 CET] <snooky> now i have audio and a still picture
[18:18:41 CET] <ncouloute> durandal_1707: So I confirmed same issue happens with concat filter. Had to remove the other 3 streams since I couldnt figure out how to map them. 3 Data stream
[18:19:37 CET] <durandal_1707> ncouloute: the frame is duplicated at split point?
[18:19:53 CET] <ncouloute> yes
[18:20:20 CET] <durandal_1707> can you open bug report on trac?
[18:21:33 CET] <snooky> so
[18:21:37 CET] <snooky> now i have it...
[18:21:52 CET] <snooky> with ffserver
[18:22:00 CET] <snooky> [mpeg @ 0x55a9a3d50130]buffer underflow st=1 bufi=5902 size=7022
[18:22:12 CET] <snooky> but what does it mean by this?
[18:22:18 CET] <durandal_1707> ffserver is not available any more in FFmpeg
[18:22:58 CET] <snooky> and what is "Past duration 0.702162 too large"?
[18:30:40 CET] <ncouloute> durandal_1707: So interestingly enough. If I convert the fps first then concat its fine... Its only when I concat first then convert the fps do I get that duplication issue. I think that works around the issue although not sure if thats an option when using the demuxer. =/
[18:31:28 CET] <durandal_1707> ncouloute: it is bug, that should be reported and ultimetely fixed
[18:32:59 CET] <kepstin> snooky: the "past duration too large" message is printed when a filter chain indicates it's doing cfr output, but two frames are too close together. there's a bug in the concat filter where this can happen if the second video is higher framerate than the first.
[18:33:14 CET] <kepstin> iirc i had a patch for that
[18:34:13 CET] <kepstin> should be fixed in ffmpeg master, but i don't think that's in a release
[19:10:21 CET] <TechnicalMonkey> testing one two three
[19:10:25 CET] <TechnicalMonkey> alright
[19:10:36 CET] <TechnicalMonkey> I can chat
[19:11:13 CET] <JEEB> yup
[19:12:03 CET] <TechnicalMonkey> so I came here looking for help on using ffmpeg to stream from a capture card, but I've run into problems
[20:05:16 CET] <snooky> ffmpeg -re -i tini2.mp4 -c:v libx264 -vf "fps=25,scale=640:480,setdar=4:3" -async 1 -pix_fmt yuv420p -preset ultrafast -map 0:0 -f v4l2 -vcodec rawvideo /dev/video1 -f alsa hw:1,0,0
[20:05:21 CET] <snooky> how can i add a watermark?
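One way to answer snooky's question is overlaying an image input via -filter_complex; a sketch with a hypothetical logo.png, showing only the video output of the command above:

```shell
# Overlay logo.png at position (10,10) on the filtered video,
# then write raw video to the v4l2 loopback device.
ffmpeg -re -i tini2.mp4 -i logo.png \
       -filter_complex "[0:v]fps=25,scale=640:480,setdar=4:3[v];[v][1:v]overlay=10:10" \
       -pix_fmt yuv420p -c:v rawvideo -f v4l2 /dev/video1
```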
[20:42:14 CET] <CFS-MP3> I'm trying to create a HLS playlist and segments from a mp4 file (actually final goal is a bit different but let's focus on this for now) but I'm not succeeding
[20:42:32 CET] <CFS-MP3> The output of this is unplayable
[20:42:35 CET] <CFS-MP3> ffmpeg -i "/home/carlos/Videos/bbb_sunflower_2160p_30fps_normal.mp4" -loglevel verbose -threads 0 -an -sn -vcodec libx264 -force_key_frames "expr:gte(t,n_forced*4)" -r 25 -f hls -hls_time 4 -hls_list_size 99999 -start_number 1 -hls_segment_type fmp4 -hls_fmp4_init_filename "bbb_init.mp4" -t 30 "bbb.m3u8"
[20:42:46 CET] <CFS-MP3> (I've tried a few other things)
[20:54:20 CET] <mifritscher> hi
[20:56:14 CET] <mifritscher> under what circumstances can av_write_frame() crash with a writing memory exception to NULL? Both the AVFormatContext and the AVPacket are ok, the latter 11k big. The used format is MPJPEG.
[20:57:49 CET] <mifritscher> the destination is a listening TCP socket
[20:58:20 CET] <mifritscher> (tcp://127.0.0.1:9722?listen
[20:58:45 CET] <JEEB> oppan gdb time I guess?
[20:59:07 CET] <JEEB> also on lunix I recommend running under valgrind
[20:59:45 CET] <JEEB> --leak-check=full --track-origins=yes
[20:59:55 CET] <JEEB> but gdb first makes sense
[21:00:03 CET] <JEEB> also possibly post code into pastebin or so :P
[21:01:03 CET] <mifritscher> I was afraid of this answer^^ It is a bit ... complicated. I only say JavaCV and Windows ;)
[21:01:39 CET] <mifritscher> it could have something to do with races - if it survives the first send it works fine
[21:02:07 CET] <JEEB> https://kuroko.fushizen.eu/random/gdb_builds/gdb-8.0.1.7z
[21:03:47 CET] <mifritscher> needs 20 minutes - despite having a fast line...
[21:04:08 CET] <JEEB> some people seem to get a weird route apparently
[21:04:16 CET] <mifritscher> I'll try to give you the quite informative crashlog, just a second
[21:06:05 CET] <mifritscher> https://mifritscher.de/austausch/crash.log
[21:07:16 CET] <mifritscher> if you need infos about the parameters I can print them
[21:08:21 CET] <JEEB> yea sorry. also you'll want to have debug symbols
[21:08:50 CET] <JEEB> as unfortunately that sort of java log doesn't really give you much info
[21:10:49 CET] <mifritscher> just another problem *g*
[21:17:38 CET] <mifritscher> as I see only one native frame I hope it crashes fairly at the beginning - so a static debug session can already help
[21:21:02 CET] <JEEB> mifritscher: one thing I can tell about AVPackets is that preferably you use av_new_packet() to initialize one
[21:21:59 CET] <JEEB> and if you need to allocate an AVPacket struct itself, there's av_packet_alloc
[21:22:14 CET] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__packet.html
[21:22:23 CET] <JEEB> both documented here
[21:25:08 CET] <mifritscher> alloc seems(!) to be done via JavaCV, but at least it uses av_init_packet() on it
[21:25:36 CET] <JEEB> you're not supposed to know or depend on the size of AVPacket I think?
[21:25:43 CET] <JEEB> no idea, though :P could also be mistaken
[21:25:51 CET] <JEEB> I have example code that just uses a stack AVPacket
[21:26:02 CET] <JEEB> and then initializes it as needed
[21:26:17 CET] <JEEB> also av_init_packet only sets everything but the buffer and the size
[21:26:22 CET] <mifritscher> I don't need the size of AVPacket, right
[21:26:45 CET] <JEEB> av_new_packet will also initialize the buffer to a size you need
[21:26:50 CET] <JEEB> including the buffering
[21:26:55 CET] <mifritscher> (I only acquired the size to see whether it is ok)
[21:27:18 CET] <JEEB> that's needed so that optimized writers can be utilized (which might require overread/write)
[21:27:27 CET] <JEEB> *optimized readers/writers
[21:28:50 CET] <JEEB> but yea, I would definitely attempt to a) get debug symbols on whatever binaries you're using if you don't have them already and b) see with a debugger
[21:29:09 CET] <JEEB> also I recommend making a simple thing without threading etc if that's also related in your case :P
[21:29:27 CET] <JEEB> attempt to have a simple case first, then grow complexity as you verify it works
[21:31:19 CET] <mifritscher> the multithreading is the thing which makes it break, I'm afraid - in simple cases it works fine
[21:32:52 CET] <JEEB> I'd say have one thing handling muxing :P
[21:33:14 CET] <JEEB> you can have multiple threads pushing into the queue if you make the queue thread-safe, but just have one thing handling the muxing :P
[21:34:24 CET] <mifritscher> I've one thread which fetches the frames and one which puts them into ffmpeg again (aka: a transcoder)
[21:46:03 CET] <mifritscher> ok, the crash is indeed fairly at the beginning of av_write_frame
[21:47:48 CET] <lilibox> hello
[21:48:14 CET] <lilibox> i am going to ask after some hours again
[21:48:19 CET] <lilibox> i would like to know the current proper way to do this: i have a couple of images and want to merge them into an .mp4, but i want to put, let's say, 10 seconds of black frames before and 15 seconds of black frames after
[21:48:37 CET] <lilibox> i found this: https://forums.creativecow.net/docs/forums/post.php?forumid=291&postid=1315&univpostid=1315&pview=t
[21:49:11 CET] <lilibox> but as i stated i am looking for the current and cleanest solution, can anybody help me? thank you very much
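One current, single-command approach for lilibox's question is the tpad filter (available in FFmpeg 4.2 and later); filenames and durations below are placeholders:

```shell
# Pad 10 s of black before and 15 s of black after the slideshow in one pass.
ffmpeg -framerate 1/5 -i img%03d.png \
       -vf "fps=25,tpad=start_duration=10:stop_duration=15:color=black" \
       -c:v libx264 -pix_fmt yuv420p out.mp4
```

Unlike concatenating a separately generated black clip, tpad keeps everything in one filter chain, so there is no second encode or concat step.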
[21:49:38 CET] <mifritscher> ok, it crashes in the function compute_muxer_pkt_fields
[21:50:17 CET] <mifritscher> (btw, thank you very much for your many debug strings!)
[21:50:28 CET] <lilibox> sometimes i need to preview short video pieces cca 3 seconds long, and some devices blend a UI over the video, where it takes 5 minutes until the UI dissolves/disappears
[21:50:53 CET] <lilibox> 5 seconds sorry :)
[21:51:01 CET] <jemius> Am I missing something or is there only a lowpass filter with at most 40dB per decade? I'd need something with more power ._.
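On jemius's question: the lowpass filter is a single biquad section (about 40 dB per decade), but instances can be cascaded in one filter chain for a steeper roll-off; a sketch with placeholder filenames and cutoff:

```shell
# Three cascaded 2nd-order lowpass sections at the same cutoff frequency
# give roughly 120 dB per decade instead of 40.
ffmpeg -i in.wav -af "lowpass=f=1000,lowpass=f=1000,lowpass=f=1000" out.wav
```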
[21:52:15 CET] <mifritscher> ok, now I'll need to dig deeper into the decompiled code *g*
[00:00:00 CET] --- Thu Oct 31 2019


More information about the Ffmpeg-devel-irc mailing list